Jan 27 14:00:26 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 14:00:26 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 14:00:26 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 14:00:26 localhost kernel: BIOS-provided physical RAM map:
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 14:00:26 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 14:00:26 localhost kernel: NX (Execute Disable) protection: active
Jan 27 14:00:26 localhost kernel: APIC: Static calls initialized
Jan 27 14:00:26 localhost kernel: SMBIOS 2.8 present.
Jan 27 14:00:26 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 14:00:26 localhost kernel: Hypervisor detected: KVM
Jan 27 14:00:26 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 14:00:26 localhost kernel: kvm-clock: using sched offset of 9273653710 cycles
Jan 27 14:00:26 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 14:00:26 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 27 14:00:26 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 14:00:26 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 14:00:26 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 14:00:26 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 14:00:26 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 14:00:26 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 14:00:26 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 14:00:26 localhost kernel: Using GB pages for direct mapping
Jan 27 14:00:26 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 14:00:26 localhost kernel: ACPI: Early table checksum verification disabled
Jan 27 14:00:26 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 14:00:26 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 14:00:26 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 14:00:26 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 14:00:26 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 14:00:26 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 14:00:26 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 14:00:26 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 14:00:26 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 14:00:26 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 14:00:26 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 14:00:26 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 14:00:26 localhost kernel: No NUMA configuration found
Jan 27 14:00:26 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 14:00:26 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 27 14:00:26 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 27 14:00:26 localhost kernel: Zone ranges:
Jan 27 14:00:26 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 14:00:26 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 14:00:26 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 14:00:26 localhost kernel:   Device   empty
Jan 27 14:00:26 localhost kernel: Movable zone start for each node
Jan 27 14:00:26 localhost kernel: Early memory node ranges
Jan 27 14:00:26 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 14:00:26 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 14:00:26 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 14:00:26 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 14:00:26 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 14:00:26 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 14:00:26 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 14:00:26 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 14:00:26 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 14:00:26 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 14:00:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 14:00:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 14:00:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 14:00:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 14:00:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 14:00:26 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 14:00:26 localhost kernel: TSC deadline timer available
Jan 27 14:00:26 localhost kernel: CPU topo: Max. logical packages:   8
Jan 27 14:00:26 localhost kernel: CPU topo: Max. logical dies:       8
Jan 27 14:00:26 localhost kernel: CPU topo: Max. dies per package:   1
Jan 27 14:00:26 localhost kernel: CPU topo: Max. threads per core:   1
Jan 27 14:00:26 localhost kernel: CPU topo: Num. cores per package:     1
Jan 27 14:00:26 localhost kernel: CPU topo: Num. threads per package:   1
Jan 27 14:00:26 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 14:00:26 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 14:00:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 14:00:26 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 14:00:26 localhost kernel: Booting paravirtualized kernel on KVM
Jan 27 14:00:26 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 14:00:26 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 14:00:26 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 14:00:26 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 27 14:00:26 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 27 14:00:26 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 14:00:26 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 14:00:26 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 14:00:26 localhost kernel: random: crng init done
Jan 27 14:00:26 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 14:00:26 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 14:00:26 localhost kernel: Fallback order for Node 0: 0 
Jan 27 14:00:26 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 14:00:26 localhost kernel: Policy zone: Normal
Jan 27 14:00:26 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 14:00:26 localhost kernel: software IO TLB: area num 8.
Jan 27 14:00:26 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 14:00:26 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 14:00:26 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 14:00:26 localhost kernel: Dynamic Preempt: voluntary
Jan 27 14:00:26 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 14:00:26 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 27 14:00:26 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 14:00:26 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 27 14:00:26 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 27 14:00:26 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 27 14:00:26 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 14:00:26 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 14:00:26 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 14:00:26 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 14:00:26 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 14:00:26 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 14:00:26 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 14:00:26 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 14:00:26 localhost kernel: Console: colour VGA+ 80x25
Jan 27 14:00:26 localhost kernel: printk: console [ttyS0] enabled
Jan 27 14:00:26 localhost kernel: ACPI: Core revision 20230331
Jan 27 14:00:26 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 14:00:26 localhost kernel: x2apic enabled
Jan 27 14:00:26 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 14:00:26 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 14:00:26 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 27 14:00:26 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 14:00:26 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 14:00:26 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 14:00:26 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 14:00:26 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 14:00:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 14:00:26 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 14:00:26 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 14:00:26 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 14:00:26 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 14:00:26 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 14:00:26 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 14:00:26 localhost kernel: x86/bugs: return thunk changed
Jan 27 14:00:26 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 14:00:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 14:00:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 14:00:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 14:00:26 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 14:00:26 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 14:00:26 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 27 14:00:26 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 27 14:00:26 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 14:00:26 localhost kernel: landlock: Up and running.
Jan 27 14:00:26 localhost kernel: Yama: becoming mindful.
Jan 27 14:00:26 localhost kernel: SELinux:  Initializing.
Jan 27 14:00:26 localhost kernel: LSM support for eBPF active
Jan 27 14:00:26 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 14:00:26 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 14:00:26 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 14:00:26 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 14:00:26 localhost kernel: ... version:                0
Jan 27 14:00:26 localhost kernel: ... bit width:              48
Jan 27 14:00:26 localhost kernel: ... generic registers:      6
Jan 27 14:00:26 localhost kernel: ... value mask:             0000ffffffffffff
Jan 27 14:00:26 localhost kernel: ... max period:             00007fffffffffff
Jan 27 14:00:26 localhost kernel: ... fixed-purpose events:   0
Jan 27 14:00:26 localhost kernel: ... event mask:             000000000000003f
Jan 27 14:00:26 localhost kernel: signal: max sigframe size: 1776
Jan 27 14:00:26 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 27 14:00:26 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 27 14:00:26 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 27 14:00:26 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 27 14:00:26 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 14:00:26 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 14:00:26 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 27 14:00:26 localhost kernel: node 0 deferred pages initialised in 13ms
Jan 27 14:00:26 localhost kernel: Memory: 7763888K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 27 14:00:26 localhost kernel: devtmpfs: initialized
Jan 27 14:00:26 localhost kernel: x86/mm: Memory block size: 128MB
Jan 27 14:00:26 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 14:00:26 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 14:00:26 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 14:00:26 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 14:00:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 14:00:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 14:00:26 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 14:00:26 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 27 14:00:26 localhost kernel: audit: type=2000 audit(1769522424.755:1): state=initialized audit_enabled=0 res=1
Jan 27 14:00:26 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 14:00:26 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 14:00:26 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 14:00:26 localhost kernel: cpuidle: using governor menu
Jan 27 14:00:26 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 14:00:26 localhost kernel: PCI: Using configuration type 1 for base access
Jan 27 14:00:26 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 27 14:00:26 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 14:00:26 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 14:00:26 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 14:00:26 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 14:00:26 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 14:00:26 localhost kernel: Demotion targets for Node 0: null
Jan 27 14:00:26 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 14:00:26 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 27 14:00:26 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 27 14:00:26 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 14:00:26 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 14:00:26 localhost kernel: ACPI: Interpreter enabled
Jan 27 14:00:26 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 14:00:26 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 14:00:26 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 14:00:26 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 14:00:26 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 14:00:26 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 14:00:26 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [3] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [4] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [5] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [6] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [7] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [8] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [9] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [10] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [11] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [12] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [13] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [14] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [15] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [16] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [17] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [18] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [19] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [20] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [21] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [22] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [23] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [24] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [25] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [26] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [27] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [28] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [29] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [30] registered
Jan 27 14:00:26 localhost kernel: acpiphp: Slot [31] registered
Jan 27 14:00:26 localhost kernel: PCI host bridge to bus 0000:00
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 14:00:26 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 14:00:26 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 14:00:26 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 14:00:26 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 14:00:26 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 14:00:26 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 14:00:26 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 14:00:26 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 14:00:26 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 14:00:26 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 14:00:26 localhost kernel: iommu: Default domain type: Translated
Jan 27 14:00:26 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 14:00:26 localhost kernel: SCSI subsystem initialized
Jan 27 14:00:26 localhost kernel: ACPI: bus type USB registered
Jan 27 14:00:26 localhost kernel: usbcore: registered new interface driver usbfs
Jan 27 14:00:26 localhost kernel: usbcore: registered new interface driver hub
Jan 27 14:00:26 localhost kernel: usbcore: registered new device driver usb
Jan 27 14:00:26 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 14:00:26 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 14:00:26 localhost kernel: PTP clock support registered
Jan 27 14:00:26 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 27 14:00:26 localhost kernel: NetLabel: Initializing
Jan 27 14:00:26 localhost kernel: NetLabel:  domain hash size = 128
Jan 27 14:00:26 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 14:00:26 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 14:00:26 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 27 14:00:26 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 27 14:00:26 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 27 14:00:26 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 14:00:26 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 14:00:26 localhost kernel: vgaarb: loaded
Jan 27 14:00:26 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 14:00:26 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 14:00:26 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 14:00:26 localhost kernel: pnp: PnP ACPI init
Jan 27 14:00:26 localhost kernel: pnp 00:03: [dma 2]
Jan 27 14:00:26 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 27 14:00:26 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 14:00:26 localhost kernel: NET: Registered PF_INET protocol family
Jan 27 14:00:26 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 14:00:26 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 14:00:26 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 14:00:26 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 14:00:26 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 14:00:26 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 14:00:26 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 14:00:26 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 14:00:26 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 14:00:26 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 14:00:26 localhost kernel: NET: Registered PF_XDP protocol family
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 14:00:26 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 14:00:26 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 14:00:26 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 14:00:26 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 84892 usecs
Jan 27 14:00:26 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 27 14:00:26 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 14:00:26 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 27 14:00:26 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 27 14:00:26 localhost kernel: ACPI: bus type thunderbolt registered
Jan 27 14:00:26 localhost kernel: Initialise system trusted keyrings
Jan 27 14:00:26 localhost kernel: Key type blacklist registered
Jan 27 14:00:26 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 14:00:26 localhost kernel: zbud: loaded
Jan 27 14:00:26 localhost kernel: integrity: Platform Keyring initialized
Jan 27 14:00:26 localhost kernel: integrity: Machine keyring initialized
Jan 27 14:00:26 localhost kernel: Freeing initrd memory: 87956K
Jan 27 14:00:26 localhost kernel: NET: Registered PF_ALG protocol family
Jan 27 14:00:26 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 27 14:00:26 localhost kernel: Key type asymmetric registered
Jan 27 14:00:26 localhost kernel: Asymmetric key parser 'x509' registered
Jan 27 14:00:26 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 14:00:26 localhost kernel: io scheduler mq-deadline registered
Jan 27 14:00:26 localhost kernel: io scheduler kyber registered
Jan 27 14:00:26 localhost kernel: io scheduler bfq registered
Jan 27 14:00:26 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 14:00:26 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 14:00:26 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 14:00:26 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 27 14:00:26 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 14:00:26 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 14:00:26 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 14:00:26 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 14:00:26 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 14:00:26 localhost kernel: Non-volatile memory driver v1.3
Jan 27 14:00:26 localhost kernel: rdac: device handler registered
Jan 27 14:00:26 localhost kernel: hp_sw: device handler registered
Jan 27 14:00:26 localhost kernel: emc: device handler registered
Jan 27 14:00:26 localhost kernel: alua: device handler registered
Jan 27 14:00:26 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 14:00:26 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 14:00:26 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 14:00:26 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 14:00:26 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 14:00:26 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 14:00:26 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 27 14:00:26 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 14:00:26 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 14:00:26 localhost kernel: hub 1-0:1.0: USB hub found
Jan 27 14:00:26 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 27 14:00:26 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 14:00:26 localhost kernel: usbserial: USB Serial support registered for generic
Jan 27 14:00:26 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 14:00:26 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 14:00:26 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 14:00:26 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 14:00:26 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 14:00:26 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 14:00:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 14:00:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 14:00:26 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 14:00:26 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T14:00:25 UTC (1769522425)
Jan 27 14:00:26 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 14:00:26 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 14:00:26 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 14:00:26 localhost kernel: usbcore: registered new interface driver usbhid
Jan 27 14:00:26 localhost kernel: usbhid: USB HID core driver
Jan 27 14:00:26 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 27 14:00:26 localhost kernel: Initializing XFRM netlink socket
Jan 27 14:00:26 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 27 14:00:26 localhost kernel: Segment Routing with IPv6
Jan 27 14:00:26 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 27 14:00:26 localhost kernel: mpls_gso: MPLS GSO support
Jan 27 14:00:26 localhost kernel: IPI shorthand broadcast: enabled
Jan 27 14:00:26 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 14:00:26 localhost kernel: AES CTR mode by8 optimization enabled
Jan 27 14:00:26 localhost kernel: sched_clock: Marking stable (1337006400, 149344510)->(1568195770, -81844860)
Jan 27 14:00:26 localhost kernel: registered taskstats version 1
Jan 27 14:00:26 localhost kernel: Loading compiled-in X.509 certificates
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 14:00:26 localhost kernel: Demotion targets for Node 0: null
Jan 27 14:00:26 localhost kernel: page_owner is disabled
Jan 27 14:00:26 localhost kernel: Key type .fscrypt registered
Jan 27 14:00:26 localhost kernel: Key type fscrypt-provisioning registered
Jan 27 14:00:26 localhost kernel: Key type big_key registered
Jan 27 14:00:26 localhost kernel: Key type encrypted registered
Jan 27 14:00:26 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 14:00:26 localhost kernel: Loading compiled-in module X.509 certificates
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 14:00:26 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 27 14:00:26 localhost kernel: ima: No architecture policies found
Jan 27 14:00:26 localhost kernel: evm: Initialising EVM extended attributes:
Jan 27 14:00:26 localhost kernel: evm: security.selinux
Jan 27 14:00:26 localhost kernel: evm: security.SMACK64 (disabled)
Jan 27 14:00:26 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 14:00:26 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 14:00:26 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 14:00:26 localhost kernel: evm: security.apparmor (disabled)
Jan 27 14:00:26 localhost kernel: evm: security.ima
Jan 27 14:00:26 localhost kernel: evm: security.capability
Jan 27 14:00:26 localhost kernel: evm: HMAC attrs: 0x1
Jan 27 14:00:26 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 14:00:26 localhost kernel: Running certificate verification RSA selftest
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 14:00:26 localhost kernel: Running certificate verification ECDSA selftest
Jan 27 14:00:26 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 14:00:26 localhost kernel: clk: Disabling unused clocks
Jan 27 14:00:26 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 27 14:00:26 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 14:00:26 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 27 14:00:26 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 14:00:26 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 14:00:26 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 14:00:26 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 14:00:26 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 27 14:00:26 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 14:00:26 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 14:00:26 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 14:00:26 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 14:00:26 localhost kernel: Run /init as init process
Jan 27 14:00:26 localhost kernel:   with arguments:
Jan 27 14:00:26 localhost kernel:     /init
Jan 27 14:00:26 localhost kernel:   with environment:
Jan 27 14:00:26 localhost kernel:     HOME=/
Jan 27 14:00:26 localhost kernel:     TERM=linux
Jan 27 14:00:26 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 27 14:00:26 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 14:00:26 localhost systemd[1]: Detected virtualization kvm.
Jan 27 14:00:26 localhost systemd[1]: Detected architecture x86-64.
Jan 27 14:00:26 localhost systemd[1]: Running in initrd.
Jan 27 14:00:26 localhost systemd[1]: No hostname configured, using default hostname.
Jan 27 14:00:26 localhost systemd[1]: Hostname set to <localhost>.
Jan 27 14:00:26 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 27 14:00:26 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 27 14:00:26 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 14:00:26 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 14:00:26 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 27 14:00:26 localhost systemd[1]: Reached target Local File Systems.
Jan 27 14:00:26 localhost systemd[1]: Reached target Path Units.
Jan 27 14:00:26 localhost systemd[1]: Reached target Slice Units.
Jan 27 14:00:26 localhost systemd[1]: Reached target Swaps.
Jan 27 14:00:26 localhost systemd[1]: Reached target Timer Units.
Jan 27 14:00:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 14:00:26 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 27 14:00:26 localhost systemd[1]: Listening on Journal Socket.
Jan 27 14:00:26 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 14:00:26 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 14:00:26 localhost systemd[1]: Reached target Socket Units.
Jan 27 14:00:26 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 14:00:26 localhost systemd[1]: Starting Journal Service...
Jan 27 14:00:26 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 14:00:26 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 14:00:26 localhost systemd[1]: Starting Create System Users...
Jan 27 14:00:26 localhost systemd[1]: Starting Setup Virtual Console...
Jan 27 14:00:26 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 14:00:26 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 14:00:26 localhost systemd-journald[305]: Journal started
Jan 27 14:00:26 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/72809274cad74f439f0853d26ac912a7) is 8.0M, max 153.6M, 145.6M free.
Jan 27 14:00:26 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 27 14:00:26 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 27 14:00:26 localhost systemd[1]: Started Journal Service.
Jan 27 14:00:26 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 14:00:26 localhost systemd[1]: Finished Create System Users.
Jan 27 14:00:26 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 14:00:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 14:00:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 14:00:26 localhost systemd[1]: Finished Setup Virtual Console.
Jan 27 14:00:26 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 14:00:26 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 14:00:26 localhost systemd[1]: Starting dracut cmdline hook...
Jan 27 14:00:26 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 14:00:26 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 14:00:26 localhost systemd[1]: Finished dracut cmdline hook.
Jan 27 14:00:26 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 27 14:00:26 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 14:00:26 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 27 14:00:26 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 14:00:26 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 27 14:00:26 localhost kernel: RPC: Registered udp transport module.
Jan 27 14:00:26 localhost kernel: RPC: Registered tcp transport module.
Jan 27 14:00:26 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 14:00:26 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 14:00:26 localhost rpc.statd[440]: Version 2.5.4 starting
Jan 27 14:00:26 localhost rpc.statd[440]: Initializing NSM state
Jan 27 14:00:26 localhost rpc.idmapd[445]: Setting log level to 0
Jan 27 14:00:26 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 27 14:00:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 14:00:26 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 14:00:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 14:00:26 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 27 14:00:26 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 27 14:00:26 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 14:00:27 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 27 14:00:27 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 14:00:27 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 14:00:27 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 14:00:27 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 14:00:27 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 14:00:27 localhost systemd[1]: Reached target Network.
Jan 27 14:00:27 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 14:00:27 localhost systemd[1]: Starting dracut initqueue hook...
Jan 27 14:00:27 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 14:00:27 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 14:00:27 localhost kernel: libata version 3.00 loaded.
Jan 27 14:00:27 localhost kernel:  vda: vda1
Jan 27 14:00:27 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 27 14:00:27 localhost kernel: scsi host0: ata_piix
Jan 27 14:00:27 localhost kernel: scsi host1: ata_piix
Jan 27 14:00:27 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 14:00:27 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 14:00:27 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 27 14:00:27 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 27 14:00:27 localhost systemd[1]: Reached target System Initialization.
Jan 27 14:00:27 localhost systemd[1]: Reached target Basic System.
Jan 27 14:00:27 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 14:00:27 localhost kernel: ata1: found unknown device (class 0)
Jan 27 14:00:27 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 14:00:27 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 14:00:27 localhost systemd-udevd[462]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:27 localhost systemd[1]: Reached target Initrd Root Device.
Jan 27 14:00:27 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 14:00:27 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 14:00:27 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 14:00:27 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 27 14:00:27 localhost systemd[1]: Finished dracut initqueue hook.
Jan 27 14:00:27 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 14:00:27 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 14:00:27 localhost systemd[1]: Reached target Remote File Systems.
Jan 27 14:00:27 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 27 14:00:27 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 27 14:00:27 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 14:00:27 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 14:00:27 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 14:00:27 localhost systemd[1]: Mounting /sysroot...
Jan 27 14:00:28 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 14:00:28 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 14:00:28 localhost kernel: XFS (vda1): Ending clean mount
Jan 27 14:00:29 localhost systemd[1]: Mounted /sysroot.
Jan 27 14:00:29 localhost systemd[1]: Reached target Initrd Root File System.
Jan 27 14:00:29 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 14:00:29 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 14:00:29 localhost systemd[1]: Reached target Initrd File Systems.
Jan 27 14:00:29 localhost systemd[1]: Reached target Initrd Default Target.
Jan 27 14:00:29 localhost systemd[1]: Starting dracut mount hook...
Jan 27 14:00:29 localhost systemd[1]: Finished dracut mount hook.
Jan 27 14:00:29 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 14:00:29 localhost rpc.idmapd[445]: exiting on signal 15
Jan 27 14:00:29 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 14:00:29 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 14:00:29 localhost systemd[1]: Stopped target Network.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Timer Units.
Jan 27 14:00:29 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 14:00:29 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Basic System.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Path Units.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Remote File Systems.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Slice Units.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Socket Units.
Jan 27 14:00:29 localhost systemd[1]: Stopped target System Initialization.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Local File Systems.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Swaps.
Jan 27 14:00:29 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut mount hook.
Jan 27 14:00:29 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 27 14:00:29 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 14:00:29 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 14:00:29 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 27 14:00:29 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 27 14:00:29 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 14:00:29 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 14:00:29 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 14:00:29 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 14:00:29 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 27 14:00:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 14:00:29 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 14:00:29 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Closed udev Control Socket.
Jan 27 14:00:29 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Closed udev Kernel Socket.
Jan 27 14:00:29 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 27 14:00:29 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 27 14:00:29 localhost systemd[1]: Starting Cleanup udev Database...
Jan 27 14:00:29 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 14:00:29 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 14:00:29 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Stopped Create System Users.
Jan 27 14:00:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 14:00:29 localhost systemd[1]: Finished Cleanup udev Database.
Jan 27 14:00:29 localhost systemd[1]: Reached target Switch Root.
Jan 27 14:00:29 localhost systemd[1]: Starting Switch Root...
Jan 27 14:00:29 localhost systemd[1]: Switching root.
Jan 27 14:00:29 localhost systemd-journald[305]: Journal stopped
Jan 27 14:00:31 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 27 14:00:31 localhost kernel: audit: type=1404 audit(1769522429.926:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability open_perms=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:00:31 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:00:31 localhost kernel: audit: type=1403 audit(1769522430.173:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 27 14:00:31 localhost systemd[1]: Successfully loaded SELinux policy in 256.964ms.
Jan 27 14:00:31 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.343ms.
Jan 27 14:00:31 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 14:00:31 localhost systemd[1]: Detected virtualization kvm.
Jan 27 14:00:31 localhost systemd[1]: Detected architecture x86-64.
Jan 27 14:00:31 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:00:31 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Stopped Switch Root.
Jan 27 14:00:31 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 14:00:31 localhost systemd[1]: Created slice Slice /system/getty.
Jan 27 14:00:31 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 27 14:00:31 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 27 14:00:31 localhost systemd[1]: Created slice User and Session Slice.
Jan 27 14:00:31 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 14:00:31 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 27 14:00:31 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 27 14:00:31 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 14:00:31 localhost systemd[1]: Stopped target Switch Root.
Jan 27 14:00:31 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 27 14:00:31 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 27 14:00:31 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 27 14:00:31 localhost systemd[1]: Reached target Path Units.
Jan 27 14:00:31 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 27 14:00:31 localhost systemd[1]: Reached target Slice Units.
Jan 27 14:00:31 localhost systemd[1]: Reached target Swaps.
Jan 27 14:00:31 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 27 14:00:31 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 27 14:00:31 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 27 14:00:31 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 27 14:00:31 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 27 14:00:31 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 14:00:31 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 14:00:31 localhost systemd[1]: Mounting Huge Pages File System...
Jan 27 14:00:31 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 27 14:00:31 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 27 14:00:31 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 27 14:00:31 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 14:00:31 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 14:00:31 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 14:00:31 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 27 14:00:31 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 27 14:00:31 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 27 14:00:31 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 27 14:00:31 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 27 14:00:31 localhost systemd[1]: Stopped Journal Service.
Jan 27 14:00:31 localhost systemd[1]: Starting Journal Service...
Jan 27 14:00:31 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 14:00:31 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 27 14:00:31 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 14:00:31 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 27 14:00:31 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 14:00:31 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 14:00:31 localhost kernel: fuse: init (API version 7.37)
Jan 27 14:00:31 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 14:00:31 localhost systemd[1]: Mounted Huge Pages File System.
Jan 27 14:00:31 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 27 14:00:31 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 27 14:00:31 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 27 14:00:31 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 14:00:31 localhost systemd-journald[677]: Journal started
Jan 27 14:00:31 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 14:00:31 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 27 14:00:31 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 27 14:00:31 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Started Journal Service.
Jan 27 14:00:31 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 14:00:31 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 27 14:00:31 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 27 14:00:31 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 27 14:00:31 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 27 14:00:31 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 27 14:00:31 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 14:00:31 localhost kernel: ACPI: bus type drm_connector registered
Jan 27 14:00:31 localhost systemd[1]: Mounting FUSE Control File System...
Jan 27 14:00:31 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 14:00:31 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 27 14:00:31 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 27 14:00:31 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 14:00:31 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 27 14:00:31 localhost systemd[1]: Starting Create System Users...
Jan 27 14:00:31 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 14:00:31 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 27 14:00:31 localhost systemd[1]: Mounted FUSE Control File System.
Jan 27 14:00:31 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 14:00:31 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 14:00:31 localhost systemd-journald[677]: Received client request to flush runtime journal.
Jan 27 14:00:31 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 27 14:00:31 localhost systemd[1]: Finished Create System Users.
Jan 27 14:00:31 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 14:00:31 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 27 14:00:31 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 14:00:32 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 14:00:32 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 27 14:00:32 localhost systemd[1]: Reached target Local File Systems.
Jan 27 14:00:32 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 27 14:00:32 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 27 14:00:32 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 14:00:32 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 27 14:00:32 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 27 14:00:32 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 27 14:00:32 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 14:00:32 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 27 14:00:32 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 27 14:00:32 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 14:00:32 localhost systemd[1]: Starting Security Auditing Service...
Jan 27 14:00:32 localhost systemd[1]: Starting RPC Bind...
Jan 27 14:00:32 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 27 14:00:32 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 27 14:00:32 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 27 14:00:32 localhost systemd[1]: Started RPC Bind.
Jan 27 14:00:32 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 27 14:00:32 localhost augenrules[706]: /sbin/augenrules: No change
Jan 27 14:00:32 localhost augenrules[722]: No rules
Jan 27 14:00:32 localhost augenrules[722]: enabled 1
Jan 27 14:00:32 localhost augenrules[722]: failure 1
Jan 27 14:00:32 localhost augenrules[722]: pid 701
Jan 27 14:00:32 localhost augenrules[722]: rate_limit 0
Jan 27 14:00:32 localhost augenrules[722]: backlog_limit 8192
Jan 27 14:00:32 localhost augenrules[722]: lost 0
Jan 27 14:00:32 localhost augenrules[722]: backlog 4
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time 60000
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 27 14:00:32 localhost augenrules[722]: enabled 1
Jan 27 14:00:32 localhost augenrules[722]: failure 1
Jan 27 14:00:32 localhost augenrules[722]: pid 701
Jan 27 14:00:32 localhost augenrules[722]: rate_limit 0
Jan 27 14:00:32 localhost augenrules[722]: backlog_limit 8192
Jan 27 14:00:32 localhost augenrules[722]: lost 0
Jan 27 14:00:32 localhost augenrules[722]: backlog 0
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time 60000
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 27 14:00:32 localhost augenrules[722]: enabled 1
Jan 27 14:00:32 localhost augenrules[722]: failure 1
Jan 27 14:00:32 localhost augenrules[722]: pid 701
Jan 27 14:00:32 localhost augenrules[722]: rate_limit 0
Jan 27 14:00:32 localhost augenrules[722]: backlog_limit 8192
Jan 27 14:00:32 localhost augenrules[722]: lost 0
Jan 27 14:00:32 localhost augenrules[722]: backlog 0
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time 60000
Jan 27 14:00:32 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 27 14:00:32 localhost systemd[1]: Started Security Auditing Service.
Jan 27 14:00:32 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 27 14:00:32 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 27 14:00:33 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 27 14:00:33 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 14:00:33 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 14:00:33 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 14:00:33 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 14:00:33 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 27 14:00:33 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 14:00:33 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 14:00:33 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 27 14:00:33 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 27 14:00:33 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 27 14:00:33 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 27 14:00:33 localhost systemd-udevd[762]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:00:34 localhost kernel: kvm_amd: TSC scaling supported
Jan 27 14:00:34 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 27 14:00:34 localhost kernel: kvm_amd: Nested Paging enabled
Jan 27 14:00:34 localhost kernel: kvm_amd: LBR virtualization supported
Jan 27 14:00:34 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 27 14:00:34 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 27 14:00:34 localhost kernel: Console: switching to colour dummy device 80x25
Jan 27 14:00:34 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 27 14:00:34 localhost kernel: [drm] features: -context_init
Jan 27 14:00:34 localhost kernel: [drm] number of scanouts: 1
Jan 27 14:00:34 localhost kernel: [drm] number of cap sets: 0
Jan 27 14:00:34 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 27 14:00:34 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 27 14:00:34 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 27 14:00:34 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 27 14:00:34 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 27 14:00:34 localhost systemd[1]: Starting Update is Completed...
Jan 27 14:00:34 localhost systemd[1]: Finished Update is Completed.
Jan 27 14:00:34 localhost systemd[1]: Reached target System Initialization.
Jan 27 14:00:34 localhost systemd[1]: Started dnf makecache --timer.
Jan 27 14:00:34 localhost systemd[1]: Started Daily rotation of log files.
Jan 27 14:00:34 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 27 14:00:34 localhost systemd[1]: Reached target Timer Units.
Jan 27 14:00:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 14:00:34 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 27 14:00:34 localhost systemd[1]: Reached target Socket Units.
Jan 27 14:00:34 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 27 14:00:34 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 14:00:34 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 27 14:00:34 localhost systemd[1]: Reached target Basic System.
Jan 27 14:00:34 localhost systemd[1]: Starting NTP client/server...
Jan 27 14:00:34 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 27 14:00:34 localhost dbus-broker-lau[810]: Ready
Jan 27 14:00:34 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 27 14:00:34 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 27 14:00:34 localhost systemd[1]: Started irqbalance daemon.
Jan 27 14:00:34 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 27 14:00:34 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:00:34 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:00:34 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:00:34 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 27 14:00:34 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 27 14:00:34 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 27 14:00:34 localhost systemd[1]: Starting User Login Management...
Jan 27 14:00:34 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 27 14:00:34 localhost systemd-logind[820]: New seat seat0.
Jan 27 14:00:34 localhost systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 14:00:34 localhost systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 14:00:34 localhost systemd[1]: Started User Login Management.
Jan 27 14:00:34 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 27 14:00:34 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 27 14:00:34 localhost chronyd[830]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 14:00:34 localhost chronyd[830]: Loaded 0 symmetric keys
Jan 27 14:00:34 localhost chronyd[830]: Using right/UTC timezone to obtain leap second data
Jan 27 14:00:34 localhost chronyd[830]: Loaded seccomp filter (level 2)
Jan 27 14:00:34 localhost systemd[1]: Started NTP client/server.
Jan 27 14:00:35 localhost iptables.init[815]: iptables: Applying firewall rules: [  OK  ]
Jan 27 14:00:35 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 27 14:00:36 localhost cloud-init[838]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 27 Jan 2026 14:00:36 +0000. Up 12.45 seconds.
Jan 27 14:00:36 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 27 14:00:36 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 27 14:00:36 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpw9v101ac.mount: Deactivated successfully.
Jan 27 14:00:37 localhost systemd[1]: Starting Hostname Service...
Jan 27 14:00:37 localhost systemd[1]: Started Hostname Service.
Jan 27 14:00:37 np0005597539.novalocal systemd-hostnamed[854]: Hostname set to <np0005597539.novalocal> (static)
Jan 27 14:00:37 np0005597539.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 27 14:00:37 np0005597539.novalocal systemd[1]: Reached target Preparation for Network.
Jan 27 14:00:37 np0005597539.novalocal systemd[1]: Starting Network Manager...
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8080] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3ec64c28-9072-4af9-bb4c-439f11a25520)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8086] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8682] manager[0x55b9b77ce000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8965] hostname: hostname: using hostnamed
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8966] hostname: static hostname changed from (none) to "np0005597539.novalocal"
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.8971] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9624] manager[0x55b9b77ce000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9626] manager[0x55b9b77ce000]: rfkill: WWAN hardware radio set enabled
Jan 27 14:00:37 np0005597539.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9713] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9713] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9714] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9715] manager: Networking is enabled by state file
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9717] settings: Loaded settings plugin: keyfile (internal)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9756] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9782] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9798] dhcp: init: Using DHCP client 'internal'
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9802] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9819] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:37 np0005597539.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9963] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9979] device (lo): Activation: starting connection 'lo' (6256a758-a13a-40c3-b045-d212ec55f25b)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9989] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 14:00:37 np0005597539.novalocal NetworkManager[858]: <info>  [1769522437.9991] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Started Network Manager.
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0022] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0025] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0027] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0028] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0029] device (eth0): carrier: link connected
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0030] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0035] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0040] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Reached target Network.
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0043] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0044] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0046] manager: NetworkManager state is now CONNECTING
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0047] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0056] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0058] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0103] dhcp4 (eth0): state changed new lease, address=38.129.56.182
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0110] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0130] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0230] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0233] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0241] device (lo): Activation: successful, device activated.
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0260] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0264] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0273] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0278] device (eth0): Activation: successful, device activated.
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0286] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 14:00:38 np0005597539.novalocal NetworkManager[858]: <info>  [1769522438.0292] manager: startup complete
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Reached target NFS client services.
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Reached target Remote File Systems.
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 14:00:38 np0005597539.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 27 Jan 2026 14:00:38 +0000. Up 14.47 seconds.
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.129.56.182         | 255.255.255.0 | global | fa:16:3e:ec:88:9c |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:feec:889c/64 |       .       |  link  | fa:16:3e:ec:88:9c |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 27 14:00:38 np0005597539.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 27 14:00:39 np0005597539.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Generating public/private rsa key pair.
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: SHA256:GseX0PVKdBkN7X1iRdTue9BgxiUES1gFyFIImF1dQuA root@np0005597539.novalocal
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +---[RSA 3072]----+
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |     +.ooB+=O=*O+|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |    o ..o.== +o *|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |        E.. o..*.|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |       . . o .B =|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |      . S o .+ =.|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |       + .    . o|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |      .        ..|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |               ..|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |                .|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: SHA256:fLGJYuG1KmDTjFsALUtLOGs6HVpAkF7fPIihRFRtW8o root@np0005597539.novalocal
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +---[ECDSA 256]---+
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |O*...            |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |*=.o o .         |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |=*= * O . .      |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |o=o* E B o +     |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |o+=.+ + S +      |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |+..= . o .       |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: | .. . .          |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |     .           |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |                 |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key fingerprint is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: SHA256:AwZg0Vn135ND+1QyecLJ6hsHRorzqnilbitg2tNvIeg root@np0005597539.novalocal
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: The key's randomart image is:
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +--[ED25519 256]--+
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |  ++.o...        |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: | .  o.   .   o o |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |      o   . . X o|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |     . . . + + B.|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |   .    S . = * .|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |  + . . .+ o . = |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: | = o . +  . o . .|
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |. E o.=  .   +   |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: |   ..B=o.   .    |
Jan 27 14:00:40 np0005597539.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Reached target Network is Online.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting System Logging Service...
Jan 27 14:00:40 np0005597539.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Permit User Sessions...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 27 14:00:40 np0005597539.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 27 14:00:40 np0005597539.novalocal sshd[1007]: Server listening on :: port 22.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Finished Permit User Sessions.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started Command Scheduler.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started Getty on tty1.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Reached target Login Prompts.
Jan 27 14:00:40 np0005597539.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 27 14:00:40 np0005597539.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 27 14:00:40 np0005597539.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 86% if used.)
Jan 27 14:00:40 np0005597539.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 27 14:00:40 np0005597539.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 27 14:00:40 np0005597539.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Started System Logging Service.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Reached target Multi-User System.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 27 14:00:40 np0005597539.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:00:40 np0005597539.novalocal kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Jan 27 14:00:40 np0005597539.novalocal kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 27 14:00:40 np0005597539.novalocal cloud-init[1108]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 27 Jan 2026 14:00:40 +0000. Up 16.34 seconds.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 27 14:00:40 np0005597539.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 27 14:00:40 np0005597539.novalocal dracut[1266]: dracut-057-102.git20250818.el9
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 27 Jan 2026 14:00:41 +0000. Up 16.80 seconds.
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1288]: #############################################################
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1292]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1299]: 256 SHA256:fLGJYuG1KmDTjFsALUtLOGs6HVpAkF7fPIihRFRtW8o root@np0005597539.novalocal (ECDSA)
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1305]: 256 SHA256:AwZg0Vn135ND+1QyecLJ6hsHRorzqnilbitg2tNvIeg root@np0005597539.novalocal (ED25519)
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1310]: 3072 SHA256:GseX0PVKdBkN7X1iRdTue9BgxiUES1gFyFIImF1dQuA root@np0005597539.novalocal (RSA)
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1315]: #############################################################
Jan 27 14:00:41 np0005597539.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 finished at Tue, 27 Jan 2026 14:00:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 16.99 seconds
Jan 27 14:00:41 np0005597539.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 27 14:00:41 np0005597539.novalocal systemd[1]: Reached target Cloud-init target.
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1384]: Unable to negotiate with 38.102.83.114 port 48052: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1388]: Connection reset by 38.102.83.114 port 48064 [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1393]: Unable to negotiate with 38.102.83.114 port 48074: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1398]: Unable to negotiate with 38.102.83.114 port 48076: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1403]: Connection closed by 38.102.83.114 port 48088 [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1375]: Connection closed by 38.102.83.114 port 48038 [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1408]: Connection closed by 38.102.83.114 port 48100 [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1413]: Unable to negotiate with 38.102.83.114 port 48106: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 27 14:00:41 np0005597539.novalocal sshd-session[1418]: Unable to negotiate with 38.102.83.114 port 48114: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:41 np0005597539.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: memstrack is not available
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: memstrack is not available
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 14:00:42 np0005597539.novalocal dracut[1269]: *** Including module: systemd ***
Jan 27 14:00:43 np0005597539.novalocal chronyd[830]: Selected source 216.128.178.20 (2.centos.pool.ntp.org)
Jan 27 14:00:43 np0005597539.novalocal chronyd[830]: System clock TAI offset set to 37 seconds
Jan 27 14:00:43 np0005597539.novalocal dracut[1269]: *** Including module: fips ***
Jan 27 14:00:43 np0005597539.novalocal dracut[1269]: *** Including module: systemd-initrd ***
Jan 27 14:00:43 np0005597539.novalocal dracut[1269]: *** Including module: i18n ***
Jan 27 14:00:43 np0005597539.novalocal dracut[1269]: *** Including module: drm ***
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]: *** Including module: prefixdevname ***
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]: *** Including module: kernel-modules ***
Jan 27 14:00:44 np0005597539.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 27 14:00:44 np0005597539.novalocal dracut[1269]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: qemu ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: fstab-sys ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: rootfs-block ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: terminfo ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: udev-rules ***
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 25 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 31 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 28 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 32 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 30 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 27 14:00:45 np0005597539.novalocal irqbalance[816]: IRQ 29 affinity is now unmanaged
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: virtiofs ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: dracut-systemd ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: usrmount ***
Jan 27 14:00:45 np0005597539.novalocal dracut[1269]: *** Including module: base ***
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]: *** Including module: fs-lib ***
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]: *** Including module: kdumpbase ***
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:   microcode_ctl module: mangling fw_dir
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 27 14:00:46 np0005597539.novalocal dracut[1269]: *** Including module: openssl ***
Jan 27 14:00:47 np0005597539.novalocal dracut[1269]: *** Including module: shutdown ***
Jan 27 14:00:47 np0005597539.novalocal dracut[1269]: *** Including module: squash ***
Jan 27 14:00:47 np0005597539.novalocal dracut[1269]: *** Including modules done ***
Jan 27 14:00:47 np0005597539.novalocal dracut[1269]: *** Installing kernel module dependencies ***
Jan 27 14:00:48 np0005597539.novalocal dracut[1269]: *** Installing kernel module dependencies done ***
Jan 27 14:00:48 np0005597539.novalocal dracut[1269]: *** Resolving executable dependencies ***
Jan 27 14:00:48 np0005597539.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: *** Resolving executable dependencies done ***
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: *** Generating early-microcode cpio image ***
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: *** Store current command line parameters ***
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: Stored kernel commandline:
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 27 14:00:49 np0005597539.novalocal dracut[1269]: *** Install squash loader ***
Jan 27 14:00:51 np0005597539.novalocal dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: *** Hardlinking files ***
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Mode:           real
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Files:          50
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Linked:         0 files
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Compared:       0 xattrs
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Compared:       0 files
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Saved:          0 B
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: Duration:       0.000566 seconds
Jan 27 14:00:52 np0005597539.novalocal dracut[1269]: *** Hardlinking files done ***
Jan 27 14:00:54 np0005597539.novalocal dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 27 14:00:54 np0005597539.novalocal kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Jan 27 14:00:54 np0005597539.novalocal kdumpctl[1019]: kdump: Starting kdump: [OK]
Jan 27 14:00:54 np0005597539.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 27 14:00:54 np0005597539.novalocal systemd[1]: Startup finished in 1.709s (kernel) + 3.989s (initrd) + 24.692s (userspace) = 30.391s.
Jan 27 14:00:59 np0005597539.novalocal sshd-session[4303]: Accepted publickey for zuul from 38.102.83.114 port 44038 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 27 14:00:59 np0005597539.novalocal systemd-logind[820]: New session 1 of user zuul.
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Queued start job for default target Main User Target.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Created slice User Application Slice.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Reached target Paths.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Reached target Timers.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Starting D-Bus User Message Bus Socket...
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Starting Create User's Volatile Files and Directories...
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Finished Create User's Volatile Files and Directories.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Listening on D-Bus User Message Bus Socket.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Reached target Sockets.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Reached target Basic System.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Reached target Main User Target.
Jan 27 14:00:59 np0005597539.novalocal systemd[4307]: Startup finished in 156ms.
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 27 14:00:59 np0005597539.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 27 14:00:59 np0005597539.novalocal sshd-session[4303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:01:00 np0005597539.novalocal python3[4389]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:01:01 np0005597539.novalocal CROND[4395]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 14:01:01 np0005597539.novalocal run-parts[4398]: (/etc/cron.hourly) starting 0anacron
Jan 27 14:01:01 np0005597539.novalocal anacron[4406]: Anacron started on 2026-01-27
Jan 27 14:01:01 np0005597539.novalocal anacron[4406]: Will run job `cron.daily' in 41 min.
Jan 27 14:01:01 np0005597539.novalocal anacron[4406]: Will run job `cron.weekly' in 61 min.
Jan 27 14:01:01 np0005597539.novalocal anacron[4406]: Will run job `cron.monthly' in 81 min.
Jan 27 14:01:01 np0005597539.novalocal anacron[4406]: Jobs will be executed sequentially
Jan 27 14:01:01 np0005597539.novalocal run-parts[4408]: (/etc/cron.hourly) finished 0anacron
Jan 27 14:01:01 np0005597539.novalocal CROND[4394]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 14:01:07 np0005597539.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 14:01:11 np0005597539.novalocal python3[4434]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:01:17 np0005597539.novalocal python3[4492]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:01:17 np0005597539.novalocal python3[4532]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 27 14:01:19 np0005597539.novalocal python3[4558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqlIrTUS90cVPdGcjtjcjOdTNnuNSH44aWkcfGoABrzQSw2o1nZz40v7CzeI6sNi0+/3Jqvb5/B/JF0iWdKOJYPHY4/zzS5/dx0vvXmYgSAJKDsFoB9fZZznrT79vG+3Yu05xrY9Aa9Q0rMYstGT8u2SMiXGOPSak6PSXeaRqHThP7v0gwe00KUWJOig8esGOZ0ZNK0BTGgLHTp3DgtlPfJ0KHCMdtmHL0OQz+D07MZgDF2DkOf4ONdZWjjzUAD19iHwVUzA6Am36c55ZHtIfAZsQ3lnxFalXW8JVOdHiRAA9DwdgiNOKmNCy4DWDpTKTMdI2SYhJlvGcqMfK84CWsD+FkvXsovT1TIPjOyjM/AQ0AYsG1sHCilkrWEE374tbP7TcA1cgl3cpTTD52QGwcglmKqFg35K64n0R2abex2bwzpKxnLxfk9QjHOuOpGJDgxfPwkVGZLnz0mAB/DNWFQK/875UCpxc1uOJJimdSBvilB+NuZQQq2b6RS6qHmqc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:19 np0005597539.novalocal python3[4582]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:20 np0005597539.novalocal python3[4681]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:20 np0005597539.novalocal python3[4752]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769522480.1523833-207-109634585944121/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=787bda770ad64765aa2b8c8ccc012bb8_id_rsa follow=False checksum=b982bddae37288e8ce0e2429dd9247b4afba0590 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:21 np0005597539.novalocal python3[4875]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:21 np0005597539.novalocal python3[4946]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769522481.0705464-240-166442607423064/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=787bda770ad64765aa2b8c8ccc012bb8_id_rsa.pub follow=False checksum=71a21d863c69e1f610e2761f6baab355af6f1211 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:22 np0005597539.novalocal python3[4994]: ansible-ping Invoked with data=pong
Jan 27 14:01:23 np0005597539.novalocal python3[5018]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:01:25 np0005597539.novalocal python3[5076]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 27 14:01:26 np0005597539.novalocal python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:26 np0005597539.novalocal python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:27 np0005597539.novalocal python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:27 np0005597539.novalocal python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:27 np0005597539.novalocal python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:27 np0005597539.novalocal python3[5228]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:29 np0005597539.novalocal sudo[5252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhbemxietmjrzhyrafiqqhfptkznhygv ; /usr/bin/python3'
Jan 27 14:01:29 np0005597539.novalocal sudo[5252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:29 np0005597539.novalocal python3[5254]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:29 np0005597539.novalocal sudo[5252]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:29 np0005597539.novalocal sudo[5330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzezgxinxivyhemexcblctpozzhbocq ; /usr/bin/python3'
Jan 27 14:01:29 np0005597539.novalocal sudo[5330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:29 np0005597539.novalocal python3[5332]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:29 np0005597539.novalocal sudo[5330]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:30 np0005597539.novalocal sudo[5403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpycfjariqqgulzcsxszsicqllxjxwtf ; /usr/bin/python3'
Jan 27 14:01:30 np0005597539.novalocal sudo[5403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:30 np0005597539.novalocal python3[5405]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769522489.5588303-21-191391836053861/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:30 np0005597539.novalocal sudo[5403]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:31 np0005597539.novalocal python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:31 np0005597539.novalocal python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:31 np0005597539.novalocal python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:31 np0005597539.novalocal python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:32 np0005597539.novalocal python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:32 np0005597539.novalocal python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:32 np0005597539.novalocal python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:33 np0005597539.novalocal python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:33 np0005597539.novalocal python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:33 np0005597539.novalocal python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:34 np0005597539.novalocal python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:34 np0005597539.novalocal python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:34 np0005597539.novalocal python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:34 np0005597539.novalocal python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:35 np0005597539.novalocal python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:35 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 27 14:01:35 np0005597539.novalocal irqbalance[816]: IRQ 26 affinity is now unmanaged
Jan 27 14:01:35 np0005597539.novalocal python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:35 np0005597539.novalocal python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:36 np0005597539.novalocal python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:36 np0005597539.novalocal python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:36 np0005597539.novalocal python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:36 np0005597539.novalocal python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:37 np0005597539.novalocal python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:37 np0005597539.novalocal python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:37 np0005597539.novalocal python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:38 np0005597539.novalocal python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:38 np0005597539.novalocal python3[6053]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:01:40 np0005597539.novalocal sudo[6077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovpixaiewziorelbfodjjwucypdegci ; /usr/bin/python3'
Jan 27 14:01:40 np0005597539.novalocal sudo[6077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:40 np0005597539.novalocal python3[6079]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 14:01:40 np0005597539.novalocal systemd[1]: Starting Time & Date Service...
Jan 27 14:01:41 np0005597539.novalocal systemd[1]: Started Time & Date Service.
Jan 27 14:01:41 np0005597539.novalocal systemd-timedated[6081]: Changed time zone to 'UTC' (UTC).
Jan 27 14:01:41 np0005597539.novalocal sudo[6077]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:42 np0005597539.novalocal sudo[6108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uudtoybtuhnqgoacmkadhkrwlwdtunih ; /usr/bin/python3'
Jan 27 14:01:42 np0005597539.novalocal sudo[6108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:42 np0005597539.novalocal python3[6110]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:42 np0005597539.novalocal sudo[6108]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:42 np0005597539.novalocal python3[6186]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:43 np0005597539.novalocal python3[6257]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769522502.7349148-153-160055704337154/source _original_basename=tmpir9nqw_m follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:43 np0005597539.novalocal python3[6357]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:44 np0005597539.novalocal python3[6428]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769522503.6596203-183-33342198764100/source _original_basename=tmp_sq_3o30 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:44 np0005597539.novalocal sudo[6528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yislgqcnqtwwogrpeoiahqcdcaxjyfwq ; /usr/bin/python3'
Jan 27 14:01:45 np0005597539.novalocal sudo[6528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:45 np0005597539.novalocal python3[6530]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:45 np0005597539.novalocal sudo[6528]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:45 np0005597539.novalocal sudo[6601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqaskzuerwsjhuslubffeufkahiznsyn ; /usr/bin/python3'
Jan 27 14:01:45 np0005597539.novalocal sudo[6601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:45 np0005597539.novalocal python3[6603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769522504.892485-231-161533012035016/source _original_basename=tmpd6c90kac follow=False checksum=56e3bf0f815d31b2efa40abaca4365a38bca1338 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:45 np0005597539.novalocal sudo[6601]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:46 np0005597539.novalocal python3[6651]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:01:46 np0005597539.novalocal python3[6677]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:01:46 np0005597539.novalocal sudo[6755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfecxsjjboojiuegcwxtpsbpzldhymn ; /usr/bin/python3'
Jan 27 14:01:46 np0005597539.novalocal sudo[6755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:46 np0005597539.novalocal python3[6757]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:01:46 np0005597539.novalocal sudo[6755]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:47 np0005597539.novalocal sudo[6828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvgixvfxeeudgyljziurcbcxmnbalvi ; /usr/bin/python3'
Jan 27 14:01:47 np0005597539.novalocal sudo[6828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:47 np0005597539.novalocal python3[6830]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769522506.6849952-273-235736450091173/source _original_basename=tmpj8n0jb53 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:47 np0005597539.novalocal sudo[6828]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:47 np0005597539.novalocal sudo[6879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdmmaccamfqohuuetoiqvebewjbcrjae ; /usr/bin/python3'
Jan 27 14:01:47 np0005597539.novalocal sudo[6879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:01:47 np0005597539.novalocal python3[6881]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-90d8-4b74-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:01:47 np0005597539.novalocal sudo[6879]: pam_unix(sudo:session): session closed for user root
Jan 27 14:01:48 np0005597539.novalocal python3[6909]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-90d8-4b74-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 27 14:01:49 np0005597539.novalocal python3[6938]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:01:55 np0005597539.novalocal irqbalance[816]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 27 14:01:55 np0005597539.novalocal irqbalance[816]: IRQ 27 affinity is now unmanaged
Jan 27 14:02:10 np0005597539.novalocal sudo[6962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwziwpetxretpjnowhanzxsrwddeobrv ; /usr/bin/python3'
Jan 27 14:02:10 np0005597539.novalocal sudo[6962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:02:10 np0005597539.novalocal python3[6964]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:02:10 np0005597539.novalocal sudo[6962]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:11 np0005597539.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 27 14:02:44 np0005597539.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 27 14:02:44 np0005597539.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0409] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 14:02:45 np0005597539.novalocal systemd-udevd[6968]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0564] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0591] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0593] device (eth1): carrier: link connected
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0595] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0601] policy: auto-activating connection 'Wired connection 1' (3b3adbdf-4ae1-3614-8d44-182832ec9532)
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0604] device (eth1): Activation: starting connection 'Wired connection 1' (3b3adbdf-4ae1-3614-8d44-182832ec9532)
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0605] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0607] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0610] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:02:45 np0005597539.novalocal NetworkManager[858]: <info>  [1769522565.0614] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:02:45 np0005597539.novalocal python3[6994]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-275c-02f4-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:02:52 np0005597539.novalocal sudo[7072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quaznqkgpnmlpwjpsivnxaqveisemwuf ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 14:02:52 np0005597539.novalocal sudo[7072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:02:52 np0005597539.novalocal python3[7074]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:02:52 np0005597539.novalocal sudo[7072]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:53 np0005597539.novalocal sudo[7145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhntqrgnzefbwyjusxixrouwocjgmfi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 14:02:53 np0005597539.novalocal sudo[7145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:02:53 np0005597539.novalocal python3[7147]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769522572.603292-102-173252407908888/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=466afdcfaf0f3c6af7392e6031c1329d7d8341ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:02:53 np0005597539.novalocal sudo[7145]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:53 np0005597539.novalocal sudo[7195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwchbbjsivejipqwqoipizcaiupfkanz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 14:02:53 np0005597539.novalocal sudo[7195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:02:54 np0005597539.novalocal python3[7197]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Stopping Network Manager...
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1024] caught SIGTERM, shutting down normally.
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1032] dhcp4 (eth0): canceled DHCP transaction
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1032] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1032] dhcp4 (eth0): state changed no lease
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1034] manager: NetworkManager state is now CONNECTING
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1157] dhcp4 (eth1): canceled DHCP transaction
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1158] dhcp4 (eth1): state changed no lease
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[858]: <info>  [1769522574.1284] exiting (success)
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Stopped Network Manager.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: NetworkManager.service: Consumed 1.142s CPU time, 10.2M memory peak.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Starting Network Manager...
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.1990] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3ec64c28-9072-4af9-bb4c-439f11a25520)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.1992] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2042] manager[0x56144d23f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Starting Hostname Service...
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Started Hostname Service.
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2809] hostname: hostname: using hostnamed
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2810] hostname: static hostname changed from (none) to "np0005597539.novalocal"
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2815] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2821] manager[0x56144d23f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2821] manager[0x56144d23f000]: rfkill: WWAN hardware radio set enabled
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2848] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2848] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2848] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2849] manager: Networking is enabled by state file
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2851] settings: Loaded settings plugin: keyfile (internal)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2854] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2879] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2887] dhcp: init: Using DHCP client 'internal'
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2889] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2894] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2898] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2907] device (lo): Activation: starting connection 'lo' (6256a758-a13a-40c3-b045-d212ec55f25b)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2913] device (eth0): carrier: link connected
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2916] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2921] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2922] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2927] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2933] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2939] device (eth1): carrier: link connected
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2942] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2946] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3b3adbdf-4ae1-3614-8d44-182832ec9532) (indicated)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2946] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2950] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2955] device (eth1): Activation: starting connection 'Wired connection 1' (3b3adbdf-4ae1-3614-8d44-182832ec9532)
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Started Network Manager.
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2961] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2964] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2967] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2968] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2970] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2973] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2975] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2977] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2979] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2985] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2988] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2995] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.2998] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.3028] dhcp4 (eth0): state changed new lease, address=38.129.56.182
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.3033] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 14:02:54 np0005597539.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 14:02:54 np0005597539.novalocal sudo[7195]: pam_unix(sudo:session): session closed for user root
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4177] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4185] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4187] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4192] device (lo): Activation: successful, device activated.
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4236] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4238] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4241] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4243] device (eth0): Activation: successful, device activated.
Jan 27 14:02:54 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522574.4248] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 14:02:54 np0005597539.novalocal python3[7277]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-275c-02f4-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:03:04 np0005597539.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:03:13 np0005597539.novalocal systemd[4307]: Starting Mark boot as successful...
Jan 27 14:03:13 np0005597539.novalocal systemd[4307]: Finished Mark boot as successful.
Jan 27 14:03:24 np0005597539.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2160] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 14:03:39 np0005597539.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:03:39 np0005597539.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2427] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2433] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2448] device (eth1): Activation: successful, device activated.
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2463] manager: startup complete
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2467] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <warn>  [1769522619.2483] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 27 14:03:39 np0005597539.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2504] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2627] dhcp4 (eth1): canceled DHCP transaction
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2627] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2628] dhcp4 (eth1): state changed no lease
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2646] policy: auto-activating connection 'ci-private-network' (48c2b12b-261d-5c47-9095-8385fdd77179)
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2651] device (eth1): Activation: starting connection 'ci-private-network' (48c2b12b-261d-5c47-9095-8385fdd77179)
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2653] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2658] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2667] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.2678] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.5410] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.5414] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:03:39 np0005597539.novalocal NetworkManager[7214]: <info>  [1769522619.5423] device (eth1): Activation: successful, device activated.
Jan 27 14:03:49 np0005597539.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:03:54 np0005597539.novalocal sshd-session[4316]: Received disconnect from 38.102.83.114 port 44038:11: disconnected by user
Jan 27 14:03:54 np0005597539.novalocal sshd-session[4316]: Disconnected from user zuul 38.102.83.114 port 44038
Jan 27 14:03:54 np0005597539.novalocal sshd-session[4303]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:03:54 np0005597539.novalocal systemd-logind[820]: Session 1 logged out. Waiting for processes to exit.
Jan 27 14:03:55 np0005597539.novalocal sshd-session[7311]: Accepted publickey for zuul from 38.102.83.114 port 46722 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 14:03:55 np0005597539.novalocal systemd-logind[820]: New session 3 of user zuul.
Jan 27 14:03:55 np0005597539.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 27 14:03:55 np0005597539.novalocal sshd-session[7311]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:03:56 np0005597539.novalocal sudo[7390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrwdtswydopvidgipqdacfyesaqrmjc ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 14:03:56 np0005597539.novalocal sudo[7390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:03:56 np0005597539.novalocal python3[7392]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:03:56 np0005597539.novalocal sudo[7390]: pam_unix(sudo:session): session closed for user root
Jan 27 14:03:56 np0005597539.novalocal sudo[7463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnvxzmoamjaeesfhvvbaualatoanvtxk ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 14:03:56 np0005597539.novalocal sudo[7463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:03:56 np0005597539.novalocal python3[7465]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769522636.0275388-259-251631217022503/source _original_basename=tmp6y31q8cl follow=False checksum=64199d560a76a7fef16bb395dbbb303478fa314c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:03:56 np0005597539.novalocal sudo[7463]: pam_unix(sudo:session): session closed for user root
Jan 27 14:03:58 np0005597539.novalocal sshd-session[7314]: Connection closed by 38.102.83.114 port 46722
Jan 27 14:03:58 np0005597539.novalocal sshd-session[7311]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:03:58 np0005597539.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 27 14:03:58 np0005597539.novalocal systemd-logind[820]: Session 3 logged out. Waiting for processes to exit.
Jan 27 14:03:58 np0005597539.novalocal systemd-logind[820]: Removed session 3.
Jan 27 14:06:13 np0005597539.novalocal systemd[4307]: Created slice User Background Tasks Slice.
Jan 27 14:06:13 np0005597539.novalocal systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 14:06:13 np0005597539.novalocal systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 14:12:42 np0005597539.novalocal sshd-session[7496]: Accepted publickey for zuul from 38.102.83.114 port 47374 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 14:12:42 np0005597539.novalocal systemd-logind[820]: New session 4 of user zuul.
Jan 27 14:12:42 np0005597539.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 27 14:12:42 np0005597539.novalocal sshd-session[7496]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:12:42 np0005597539.novalocal sudo[7523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatkqlpncypysbomzxqejrcxtpjpsvam ; /usr/bin/python3'
Jan 27 14:12:42 np0005597539.novalocal sudo[7523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:42 np0005597539.novalocal python3[7525]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-c916-8a4f-000000002187-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:43 np0005597539.novalocal sudo[7523]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:43 np0005597539.novalocal sudo[7552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvitzbarvrupphfihcuihhokolxgnrsm ; /usr/bin/python3'
Jan 27 14:12:43 np0005597539.novalocal sudo[7552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:43 np0005597539.novalocal python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:43 np0005597539.novalocal sudo[7552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:43 np0005597539.novalocal sudo[7578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkyfmayetzovqkswimrsuxccflhtagmm ; /usr/bin/python3'
Jan 27 14:12:43 np0005597539.novalocal sudo[7578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:43 np0005597539.novalocal python3[7580]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:43 np0005597539.novalocal sudo[7578]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:43 np0005597539.novalocal sudo[7604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqhdlltxehztsfuwkuacmhjzqdsbafr ; /usr/bin/python3'
Jan 27 14:12:43 np0005597539.novalocal sudo[7604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:43 np0005597539.novalocal python3[7606]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:43 np0005597539.novalocal sudo[7604]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:43 np0005597539.novalocal sudo[7630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpizewxmyzjxqzghgzlgunigigekera ; /usr/bin/python3'
Jan 27 14:12:43 np0005597539.novalocal sudo[7630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:44 np0005597539.novalocal python3[7632]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:44 np0005597539.novalocal sudo[7630]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:44 np0005597539.novalocal sudo[7656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmlwxahfpaolvwsutbqoctyammerjybx ; /usr/bin/python3'
Jan 27 14:12:44 np0005597539.novalocal sudo[7656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:44 np0005597539.novalocal python3[7658]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:44 np0005597539.novalocal sudo[7656]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:44 np0005597539.novalocal sudo[7734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssclldkauobgimrakzisdrbonijogelk ; /usr/bin/python3'
Jan 27 14:12:44 np0005597539.novalocal sudo[7734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:45 np0005597539.novalocal python3[7736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:12:45 np0005597539.novalocal sudo[7734]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:45 np0005597539.novalocal sudo[7807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eldblazmmjgyvomrqxiykkkxuesfxgxx ; /usr/bin/python3'
Jan 27 14:12:45 np0005597539.novalocal sudo[7807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:45 np0005597539.novalocal python3[7809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769523164.7755916-516-230133743924808/source _original_basename=tmp1zweo_0f follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:12:45 np0005597539.novalocal sudo[7807]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:46 np0005597539.novalocal sudo[7857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcfhhepptqhvcvikqklytrxtjamkrxgi ; /usr/bin/python3'
Jan 27 14:12:46 np0005597539.novalocal sudo[7857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:46 np0005597539.novalocal python3[7859]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:12:46 np0005597539.novalocal systemd[1]: Reloading.
Jan 27 14:12:46 np0005597539.novalocal systemd-rc-local-generator[7880]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:12:46 np0005597539.novalocal sudo[7857]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:47 np0005597539.novalocal sudo[7913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslblwnrypctkynwfmghkbbqrajxoduq ; /usr/bin/python3'
Jan 27 14:12:47 np0005597539.novalocal sudo[7913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:47 np0005597539.novalocal python3[7915]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 27 14:12:47 np0005597539.novalocal sudo[7913]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:48 np0005597539.novalocal sudo[7939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajfucoiqsoxsprwefyadmyrbwovyweyr ; /usr/bin/python3'
Jan 27 14:12:48 np0005597539.novalocal sudo[7939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:48 np0005597539.novalocal python3[7941]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:48 np0005597539.novalocal sudo[7939]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:48 np0005597539.novalocal sudo[7967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqdavgrtnapkyrcjdffyphifhyeexmj ; /usr/bin/python3'
Jan 27 14:12:48 np0005597539.novalocal sudo[7967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:48 np0005597539.novalocal python3[7969]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:48 np0005597539.novalocal sudo[7967]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:48 np0005597539.novalocal sudo[7995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwltotumnxvblfryzdphmeexzwyvzzs ; /usr/bin/python3'
Jan 27 14:12:48 np0005597539.novalocal sudo[7995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:48 np0005597539.novalocal python3[7997]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:48 np0005597539.novalocal sudo[7995]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:48 np0005597539.novalocal sudo[8023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-judotvjnmzzpsyvbnhgjwlvivuyyqyrw ; /usr/bin/python3'
Jan 27 14:12:48 np0005597539.novalocal sudo[8023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:49 np0005597539.novalocal python3[8025]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:49 np0005597539.novalocal sudo[8023]: pam_unix(sudo:session): session closed for user root
Jan 27 14:12:49 np0005597539.novalocal python3[8052]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-c916-8a4f-00000000218e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:12:50 np0005597539.novalocal python3[8082]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 14:12:51 np0005597539.novalocal sshd-session[7499]: Connection closed by 38.102.83.114 port 47374
Jan 27 14:12:51 np0005597539.novalocal sshd-session[7496]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:12:51 np0005597539.novalocal systemd-logind[820]: Session 4 logged out. Waiting for processes to exit.
Jan 27 14:12:51 np0005597539.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 27 14:12:51 np0005597539.novalocal systemd[1]: session-4.scope: Consumed 4.179s CPU time.
Jan 27 14:12:51 np0005597539.novalocal systemd-logind[820]: Removed session 4.
Jan 27 14:12:53 np0005597539.novalocal sshd-session[8089]: Accepted publickey for zuul from 38.102.83.114 port 49990 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 14:12:53 np0005597539.novalocal systemd-logind[820]: New session 5 of user zuul.
Jan 27 14:12:53 np0005597539.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 27 14:12:53 np0005597539.novalocal sshd-session[8089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:12:53 np0005597539.novalocal sudo[8116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqivwsdklaewocytnzaewzrntweeetey ; /usr/bin/python3'
Jan 27 14:12:53 np0005597539.novalocal sudo[8116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:12:53 np0005597539.novalocal python3[8118]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 14:12:59 np0005597539.novalocal setsebool[8157]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 27 14:12:59 np0005597539.novalocal setsebool[8157]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:13:11 np0005597539.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:13:21 np0005597539.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:13:40 np0005597539.novalocal dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 14:13:40 np0005597539.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:13:40 np0005597539.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:13:40 np0005597539.novalocal systemd[1]: Reloading.
Jan 27 14:13:40 np0005597539.novalocal systemd-rc-local-generator[8923]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:13:40 np0005597539.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:13:41 np0005597539.novalocal sudo[8116]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:51 np0005597539.novalocal python3[15180]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-3e7f-1883-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:13:52 np0005597539.novalocal kernel: evm: overlay not supported
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: Starting D-Bus User Message Bus...
Jan 27 14:13:52 np0005597539.novalocal dbus-broker-launch[15688]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 27 14:13:52 np0005597539.novalocal dbus-broker-launch[15688]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: Started D-Bus User Message Bus.
Jan 27 14:13:52 np0005597539.novalocal dbus-broker-lau[15688]: Ready
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: Created slice Slice /user.
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: podman-15629.scope: unit configures an IP firewall, but not running as root.
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: Started podman-15629.scope.
Jan 27 14:13:52 np0005597539.novalocal systemd[4307]: Started podman-pause-15c70bb0.scope.
Jan 27 14:13:53 np0005597539.novalocal sudo[15991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffazxrwpfuugeagaqdcojppqyjjalfde ; /usr/bin/python3'
Jan 27 14:13:53 np0005597539.novalocal sudo[15991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:13:53 np0005597539.novalocal python3[16004]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.242:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.242:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:13:53 np0005597539.novalocal python3[16004]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 27 14:13:53 np0005597539.novalocal sudo[15991]: pam_unix(sudo:session): session closed for user root
Jan 27 14:13:53 np0005597539.novalocal sshd-session[8092]: Connection closed by 38.102.83.114 port 49990
Jan 27 14:13:53 np0005597539.novalocal sshd-session[8089]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:13:53 np0005597539.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 27 14:13:53 np0005597539.novalocal systemd[1]: session-5.scope: Consumed 43.525s CPU time.
Jan 27 14:13:53 np0005597539.novalocal systemd-logind[820]: Session 5 logged out. Waiting for processes to exit.
Jan 27 14:13:53 np0005597539.novalocal systemd-logind[820]: Removed session 5.
Jan 27 14:14:15 np0005597539.novalocal sshd-session[23140]: Unable to negotiate with 38.129.56.249 port 47876: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 14:14:15 np0005597539.novalocal sshd-session[23142]: Connection closed by 38.129.56.249 port 47860 [preauth]
Jan 27 14:14:15 np0005597539.novalocal sshd-session[23141]: Connection closed by 38.129.56.249 port 47862 [preauth]
Jan 27 14:14:15 np0005597539.novalocal sshd-session[23144]: Unable to negotiate with 38.129.56.249 port 47874: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 14:14:15 np0005597539.novalocal sshd-session[23145]: Unable to negotiate with 38.129.56.249 port 47882: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 14:14:20 np0005597539.novalocal sshd-session[24306]: Accepted publickey for zuul from 38.102.83.114 port 56488 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 14:14:20 np0005597539.novalocal systemd-logind[820]: New session 6 of user zuul.
Jan 27 14:14:20 np0005597539.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 27 14:14:20 np0005597539.novalocal sshd-session[24306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:14:20 np0005597539.novalocal python3[24413]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmWoIOGkc05xcMBLpYGqrhtR5F/jhvcC1BwI6sDTNz8ErkFd2kYEboy8XWshhcO5Fraz0f8fNaofUs/3G34ZZg= zuul@np0005597538.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:14:21 np0005597539.novalocal sudo[24611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maiskbwbfkeamaxubaxwgcwtlnzqaola ; /usr/bin/python3'
Jan 27 14:14:21 np0005597539.novalocal sudo[24611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:21 np0005597539.novalocal python3[24618]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmWoIOGkc05xcMBLpYGqrhtR5F/jhvcC1BwI6sDTNz8ErkFd2kYEboy8XWshhcO5Fraz0f8fNaofUs/3G34ZZg= zuul@np0005597538.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:14:21 np0005597539.novalocal sudo[24611]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:21 np0005597539.novalocal sudo[24961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxwkjugpcqrwxileonifxligaquxcbwe ; /usr/bin/python3'
Jan 27 14:14:21 np0005597539.novalocal sudo[24961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:22 np0005597539.novalocal python3[24967]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005597539.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 27 14:14:22 np0005597539.novalocal useradd[25042]: new group: name=cloud-admin, GID=1002
Jan 27 14:14:22 np0005597539.novalocal useradd[25042]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 27 14:14:22 np0005597539.novalocal sudo[24961]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:22 np0005597539.novalocal sudo[25191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmstixzwovaqdhsaompvaytpeoxzvpw ; /usr/bin/python3'
Jan 27 14:14:22 np0005597539.novalocal sudo[25191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:22 np0005597539.novalocal python3[25200]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCmWoIOGkc05xcMBLpYGqrhtR5F/jhvcC1BwI6sDTNz8ErkFd2kYEboy8XWshhcO5Fraz0f8fNaofUs/3G34ZZg= zuul@np0005597538.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 14:14:22 np0005597539.novalocal sudo[25191]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:23 np0005597539.novalocal sudo[25401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrzqdaqhgvkqouggafmzzllkjuzzbfv ; /usr/bin/python3'
Jan 27 14:14:23 np0005597539.novalocal sudo[25401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:23 np0005597539.novalocal python3[25407]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:14:23 np0005597539.novalocal sudo[25401]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:23 np0005597539.novalocal sudo[25627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpyzarshacxscvckfzhlfllaepdqmeqg ; /usr/bin/python3'
Jan 27 14:14:23 np0005597539.novalocal sudo[25627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:23 np0005597539.novalocal python3[25634]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769523262.956938-135-18356332409994/source _original_basename=tmpxayazrez follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:14:23 np0005597539.novalocal sudo[25627]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:24 np0005597539.novalocal sudo[25914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-texqjhqsesvbctarhpzzxgctzxibwyix ; /usr/bin/python3'
Jan 27 14:14:24 np0005597539.novalocal sudo[25914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:14:24 np0005597539.novalocal python3[25921]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 27 14:14:24 np0005597539.novalocal systemd[1]: Starting Hostname Service...
Jan 27 14:14:24 np0005597539.novalocal systemd[1]: Started Hostname Service.
Jan 27 14:14:24 np0005597539.novalocal systemd-hostnamed[26001]: Changed pretty hostname to 'compute-0'
Jan 27 14:14:24 compute-0 systemd-hostnamed[26001]: Hostname set to <compute-0> (static)
Jan 27 14:14:24 compute-0 NetworkManager[7214]: <info>  [1769523264.6992] hostname: static hostname changed from "np0005597539.novalocal" to "compute-0"
Jan 27 14:14:24 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:14:24 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:14:24 compute-0 sudo[25914]: pam_unix(sudo:session): session closed for user root
Jan 27 14:14:25 compute-0 sshd-session[24362]: Connection closed by 38.102.83.114 port 56488
Jan 27 14:14:25 compute-0 sshd-session[24306]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:14:25 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 27 14:14:25 compute-0 systemd[1]: session-6.scope: Consumed 2.294s CPU time.
Jan 27 14:14:25 compute-0 systemd-logind[820]: Session 6 logged out. Waiting for processes to exit.
Jan 27 14:14:25 compute-0 systemd-logind[820]: Removed session 6.
Jan 27 14:14:34 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:14:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:14:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:14:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 59.866s CPU time.
Jan 27 14:14:37 compute-0 systemd[1]: run-rd880943202d741a6b54e012f92faed30.service: Deactivated successfully.
Jan 27 14:14:54 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 14:16:03 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 27 14:16:03 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 27 14:16:03 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 27 14:16:03 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 27 14:20:57 compute-0 systemd[1]: Starting dnf makecache...
Jan 27 14:20:57 compute-0 sshd-session[29935]: Accepted publickey for zuul from 38.129.56.249 port 56970 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 14:20:57 compute-0 systemd-logind[820]: New session 7 of user zuul.
Jan 27 14:20:57 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 27 14:20:57 compute-0 sshd-session[29935]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:20:57 compute-0 dnf[29937]: Failed determining last makecache time.
Jan 27 14:20:58 compute-0 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:20:58 compute-0 dnf[29937]: CentOS Stream 9 - BaseOS                        7.0 kB/s | 6.7 kB     00:00
Jan 27 14:20:58 compute-0 dnf[29937]: CentOS Stream 9 - AppStream                      67 kB/s | 6.8 kB     00:00
Jan 27 14:20:59 compute-0 dnf[29937]: CentOS Stream 9 - CRB                            69 kB/s | 6.6 kB     00:00
Jan 27 14:20:59 compute-0 dnf[29937]: CentOS Stream 9 - Extras packages                74 kB/s | 7.3 kB     00:00
Jan 27 14:20:59 compute-0 dnf[29937]: Metadata cache created.
Jan 27 14:20:59 compute-0 sudo[30130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqunrcbwrjhdqzvlljncsaloampgrjl ; /usr/bin/python3'
Jan 27 14:20:59 compute-0 sudo[30130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:20:59 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 14:20:59 compute-0 systemd[1]: Finished dnf makecache.
Jan 27 14:20:59 compute-0 python3[30132]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:20:59 compute-0 sudo[30130]: pam_unix(sudo:session): session closed for user root
Jan 27 14:20:59 compute-0 sudo[30204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngwwtxmwfcgddnqqmmvnsvqdroijaoel ; /usr/bin/python3'
Jan 27 14:20:59 compute-0 sudo[30204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:00 compute-0 python3[30206]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:00 compute-0 sudo[30204]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:00 compute-0 sudo[30230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxfbvxfwzwqvtzoxkgkvowjuvlplirin ; /usr/bin/python3'
Jan 27 14:21:00 compute-0 sudo[30230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:00 compute-0 python3[30232]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:00 compute-0 sudo[30230]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:00 compute-0 sudo[30303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asxsvsghqgsamwfbsmhwqoveusgafkbx ; /usr/bin/python3'
Jan 27 14:21:00 compute-0 sudo[30303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:00 compute-0 python3[30305]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:00 compute-0 sudo[30303]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:00 compute-0 sudo[30329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdcvpuzvlyvrlnsktwnehfrseckaejc ; /usr/bin/python3'
Jan 27 14:21:00 compute-0 sudo[30329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:00 compute-0 python3[30331]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:00 compute-0 sudo[30329]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:01 compute-0 sudo[30402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlimvyqpnygijymzqurmtxjdyljlghzh ; /usr/bin/python3'
Jan 27 14:21:01 compute-0 sudo[30402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:01 compute-0 python3[30404]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:01 compute-0 sudo[30402]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:01 compute-0 sudo[30428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljwlfappityqabbgrlhcdgjdbxovreqy ; /usr/bin/python3'
Jan 27 14:21:01 compute-0 sudo[30428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:01 compute-0 python3[30430]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:01 compute-0 sudo[30428]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:01 compute-0 sudo[30501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiruklsehxiogmdbnwidsgycvuzreekf ; /usr/bin/python3'
Jan 27 14:21:01 compute-0 sudo[30501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:02 compute-0 python3[30503]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:02 compute-0 sudo[30501]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:02 compute-0 sudo[30527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnyyrtqzdawrymxbfkaiuaapptufvzcx ; /usr/bin/python3'
Jan 27 14:21:02 compute-0 sudo[30527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:02 compute-0 python3[30529]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:02 compute-0 sudo[30527]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:02 compute-0 sudo[30600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgiekxlgniugkrgfzrhckguoiuipgzyh ; /usr/bin/python3'
Jan 27 14:21:02 compute-0 sudo[30600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:02 compute-0 python3[30602]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:02 compute-0 sudo[30600]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:02 compute-0 sudo[30626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irhnotcospdpajqsquesrrephyzfgjli ; /usr/bin/python3'
Jan 27 14:21:02 compute-0 sudo[30626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:02 compute-0 python3[30628]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:02 compute-0 sudo[30626]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:03 compute-0 sudo[30699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nueetblzeazudhckzjsqdqteyugufjjj ; /usr/bin/python3'
Jan 27 14:21:03 compute-0 sudo[30699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:03 compute-0 python3[30701]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:03 compute-0 sudo[30699]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:03 compute-0 sudo[30725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfzskhuqoxtehqgshouirafqlgguyku ; /usr/bin/python3'
Jan 27 14:21:03 compute-0 sudo[30725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:03 compute-0 python3[30727]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 14:21:03 compute-0 sudo[30725]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:03 compute-0 sudo[30798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klvrawwpqfhdooafnzkxofqvuwlopsdp ; /usr/bin/python3'
Jan 27 14:21:03 compute-0 sudo[30798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:21:03 compute-0 python3[30800]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769523659.2917402-33634-171570425683326/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:21:03 compute-0 sudo[30798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:21:06 compute-0 sshd-session[30825]: Connection closed by 192.168.122.11 port 54092 [preauth]
Jan 27 14:21:06 compute-0 sshd-session[30827]: Connection closed by 192.168.122.11 port 54096 [preauth]
Jan 27 14:21:06 compute-0 sshd-session[30826]: Unable to negotiate with 192.168.122.11 port 54106: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 14:21:06 compute-0 sshd-session[30828]: Unable to negotiate with 192.168.122.11 port 54120: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 14:21:06 compute-0 sshd-session[30829]: Unable to negotiate with 192.168.122.11 port 54136: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 14:24:29 compute-0 python3[30860]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:29:29 compute-0 sshd-session[29939]: Received disconnect from 38.129.56.249 port 56970:11: disconnected by user
Jan 27 14:29:29 compute-0 sshd-session[29939]: Disconnected from user zuul 38.129.56.249 port 56970
Jan 27 14:29:29 compute-0 sshd-session[29935]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:29:29 compute-0 systemd-logind[820]: Session 7 logged out. Waiting for processes to exit.
Jan 27 14:29:29 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 27 14:29:29 compute-0 systemd[1]: session-7.scope: Consumed 5.131s CPU time.
Jan 27 14:29:29 compute-0 systemd-logind[820]: Removed session 7.
Jan 27 14:40:07 compute-0 sshd-session[30869]: Accepted publickey for zuul from 192.168.122.30 port 43274 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:40:07 compute-0 systemd-logind[820]: New session 8 of user zuul.
Jan 27 14:40:07 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 27 14:40:07 compute-0 sshd-session[30869]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:40:08 compute-0 python3.9[31022]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:40:09 compute-0 sudo[31201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxejfnizxaupibilzgzukmvaotxysfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524809.4100444-27-236178338097162/AnsiballZ_command.py'
Jan 27 14:40:09 compute-0 sudo[31201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:10 compute-0 python3.9[31203]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:40:17 compute-0 sudo[31201]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:18 compute-0 sshd-session[30872]: Connection closed by 192.168.122.30 port 43274
Jan 27 14:40:18 compute-0 sshd-session[30869]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:40:18 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 27 14:40:18 compute-0 systemd[1]: session-8.scope: Consumed 8.279s CPU time.
Jan 27 14:40:18 compute-0 systemd-logind[820]: Session 8 logged out. Waiting for processes to exit.
Jan 27 14:40:18 compute-0 systemd-logind[820]: Removed session 8.
Jan 27 14:40:24 compute-0 sshd-session[31261]: Accepted publickey for zuul from 192.168.122.30 port 48176 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:40:24 compute-0 systemd-logind[820]: New session 9 of user zuul.
Jan 27 14:40:24 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 27 14:40:24 compute-0 sshd-session[31261]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:40:25 compute-0 python3.9[31414]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:40:25 compute-0 sshd-session[31264]: Connection closed by 192.168.122.30 port 48176
Jan 27 14:40:25 compute-0 sshd-session[31261]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:40:25 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 27 14:40:25 compute-0 systemd-logind[820]: Session 9 logged out. Waiting for processes to exit.
Jan 27 14:40:25 compute-0 systemd-logind[820]: Removed session 9.
Jan 27 14:40:42 compute-0 sshd-session[31443]: Accepted publickey for zuul from 192.168.122.30 port 58600 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:40:42 compute-0 systemd-logind[820]: New session 10 of user zuul.
Jan 27 14:40:42 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 27 14:40:42 compute-0 sshd-session[31443]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:40:43 compute-0 python3.9[31596]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 14:40:44 compute-0 python3.9[31770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:40:45 compute-0 sudo[31920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zicwjkaiegdovshstwpztnwpybiyvies ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524845.0902693-40-251385232613694/AnsiballZ_command.py'
Jan 27 14:40:45 compute-0 sudo[31920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:45 compute-0 python3.9[31922]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:40:45 compute-0 sudo[31920]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:46 compute-0 sudo[32073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwqsqoqwyraooosrjjprzoofuebghpbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524846.1294832-52-34221352412981/AnsiballZ_stat.py'
Jan 27 14:40:46 compute-0 sudo[32073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:46 compute-0 python3.9[32075]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:40:46 compute-0 sudo[32073]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:47 compute-0 sudo[32225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqcpzjyxzkfsbmelulhlkwhmqhihfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524847.058579-60-272334212224800/AnsiballZ_file.py'
Jan 27 14:40:47 compute-0 sudo[32225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:47 compute-0 python3.9[32227]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:40:47 compute-0 sudo[32225]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:48 compute-0 sudo[32377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwmgwepdktgchhrozqhtfxgyssstyay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524848.0042698-68-78701556662172/AnsiballZ_stat.py'
Jan 27 14:40:48 compute-0 sudo[32377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:48 compute-0 python3.9[32379]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:40:48 compute-0 sudo[32377]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:49 compute-0 sudo[32500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewhgtibithjdqdzsgadvlwjdjbcgtws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524848.0042698-68-78701556662172/AnsiballZ_copy.py'
Jan 27 14:40:49 compute-0 sudo[32500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:49 compute-0 python3.9[32502]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769524848.0042698-68-78701556662172/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:40:49 compute-0 sudo[32500]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:49 compute-0 sudo[32652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueyqxqkwiqsjngbqajnambidgxkozlzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524849.6026528-83-173015834873782/AnsiballZ_setup.py'
Jan 27 14:40:49 compute-0 sudo[32652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:50 compute-0 python3.9[32654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:40:50 compute-0 sudo[32652]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:51 compute-0 sudo[32808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsajzitftufdqpeivzpuovqkwqccgqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524850.5991716-91-13755788093168/AnsiballZ_file.py'
Jan 27 14:40:51 compute-0 sudo[32808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:51 compute-0 python3.9[32810]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:40:51 compute-0 sudo[32808]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:51 compute-0 sudo[32960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupbbsfjkgusujwpdsknturbokuilgqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524851.6229262-100-209001088567417/AnsiballZ_file.py'
Jan 27 14:40:51 compute-0 sudo[32960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:40:52 compute-0 python3.9[32962]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:40:52 compute-0 sudo[32960]: pam_unix(sudo:session): session closed for user root
Jan 27 14:40:53 compute-0 python3.9[33112]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:40:57 compute-0 python3.9[33365]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:40:58 compute-0 python3.9[33515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:40:59 compute-0 python3.9[33669]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:41:00 compute-0 sudo[33825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwjlnojfhuzhgsnrydhzjcbacivwzoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524860.3971407-148-235167301790417/AnsiballZ_setup.py'
Jan 27 14:41:00 compute-0 sudo[33825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:00 compute-0 python3.9[33827]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:41:01 compute-0 sudo[33825]: pam_unix(sudo:session): session closed for user root
Jan 27 14:41:01 compute-0 sudo[33909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swbuifuvowegwjznnlystfkletiimucy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524860.3971407-148-235167301790417/AnsiballZ_dnf.py'
Jan 27 14:41:01 compute-0 sudo[33909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:41:01 compute-0 python3.9[33911]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:41:16 compute-0 sshd-session[33985]: banner exchange: Connection from 165.154.218.158 port 39022: invalid format
Jan 27 14:41:34 compute-0 sshd-session[33986]: Connection closed by 165.154.218.158 port 39030
Jan 27 14:41:35 compute-0 sshd-session[34059]: Connection closed by 165.154.218.158 port 35754 [preauth]
Jan 27 14:41:35 compute-0 sshd-session[34061]: error: Protocol major versions differ: 2 vs. 1
Jan 27 14:41:35 compute-0 sshd-session[34061]: banner exchange: Connection from 165.154.218.158 port 35756: could not read protocol version
Jan 27 14:41:50 compute-0 systemd[1]: Reloading.
Jan 27 14:41:50 compute-0 systemd-rc-local-generator[34116]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:41:50 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 27 14:41:51 compute-0 systemd[1]: Reloading.
Jan 27 14:41:51 compute-0 systemd-rc-local-generator[34155]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:41:51 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 27 14:41:51 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 27 14:41:51 compute-0 systemd[1]: Reloading.
Jan 27 14:41:51 compute-0 systemd-rc-local-generator[34195]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:41:51 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 27 14:41:52 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:41:52 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:41:52 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:42:01 compute-0 anacron[4406]: Job `cron.daily' started
Jan 27 14:42:01 compute-0 anacron[4406]: Job `cron.daily' terminated
Jan 27 14:43:00 compute-0 kernel: SELinux:  Converting 2725 SID table entries...
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:43:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:43:01 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 27 14:43:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:43:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:43:01 compute-0 systemd[1]: Reloading.
Jan 27 14:43:01 compute-0 systemd-rc-local-generator[34514]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:43:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:43:01 compute-0 sudo[33909]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:43:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:43:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.103s CPU time.
Jan 27 14:43:02 compute-0 systemd[1]: run-r5f2f57b39b314db8bbb27adf0e06327b.service: Deactivated successfully.
Jan 27 14:43:02 compute-0 sudo[35424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfatnvolmssoqbwfevvqkiwjtqawshci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524982.0503876-160-101484142688246/AnsiballZ_command.py'
Jan 27 14:43:02 compute-0 sudo[35424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:02 compute-0 python3.9[35427]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:03 compute-0 sudo[35424]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:04 compute-0 sudo[35707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywvdwceacunwuzfkzjfdziofglmdaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524983.8224108-168-277472414400902/AnsiballZ_selinux.py'
Jan 27 14:43:04 compute-0 sudo[35707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:04 compute-0 python3.9[35709]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 14:43:04 compute-0 sudo[35707]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:05 compute-0 sudo[35859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncmaapyiasspoizgtjoyzounxvshgrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524985.207187-179-260981749142809/AnsiballZ_command.py'
Jan 27 14:43:05 compute-0 sudo[35859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:05 compute-0 python3.9[35861]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 14:43:06 compute-0 sudo[35859]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:07 compute-0 sudo[36012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhtvgkykmukckgfatbdfxifzntoolze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524986.7095191-187-271636683292249/AnsiballZ_file.py'
Jan 27 14:43:07 compute-0 sudo[36012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:08 compute-0 python3.9[36014]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:43:08 compute-0 sudo[36012]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:08 compute-0 sudo[36165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jweectbcyolrvjkxxmpdkokrlnbpkeqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524988.386845-195-108912328874317/AnsiballZ_mount.py'
Jan 27 14:43:08 compute-0 sudo[36165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:09 compute-0 python3.9[36167]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 14:43:09 compute-0 sudo[36165]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:10 compute-0 sudo[36317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdufqwqzchwbonktboibgkjruqntzqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524990.0436826-223-29900315062078/AnsiballZ_file.py'
Jan 27 14:43:10 compute-0 sudo[36317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:10 compute-0 python3.9[36319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:43:10 compute-0 sudo[36317]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:10 compute-0 sudo[36469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexcyeorlduqlufaajbwtwxlypfsfjdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524990.7244728-231-147182145483833/AnsiballZ_stat.py'
Jan 27 14:43:10 compute-0 sudo[36469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:11 compute-0 python3.9[36471]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:43:11 compute-0 sudo[36469]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:11 compute-0 sudo[36592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapykqpangyiwsljljjgjpxhiksmqxpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524990.7244728-231-147182145483833/AnsiballZ_copy.py'
Jan 27 14:43:11 compute-0 sudo[36592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:11 compute-0 python3.9[36594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769524990.7244728-231-147182145483833/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:43:11 compute-0 sudo[36592]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:12 compute-0 sudo[36744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbwbknpexhturuazdweaudwpaitjgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524992.3007262-255-35792944397037/AnsiballZ_stat.py'
Jan 27 14:43:12 compute-0 sudo[36744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:14 compute-0 python3.9[36746]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:43:14 compute-0 sudo[36744]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:15 compute-0 sudo[36896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qriwqokjohujvigeuketycdxsogiqmkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524995.063448-263-103688313176870/AnsiballZ_command.py'
Jan 27 14:43:15 compute-0 sudo[36896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:15 compute-0 python3.9[36898]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:15 compute-0 sudo[36896]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:16 compute-0 sudo[37049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuffysasexluqapcnelhiiebyvlcgao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524995.8529334-271-60522493748814/AnsiballZ_file.py'
Jan 27 14:43:16 compute-0 sudo[37049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:16 compute-0 python3.9[37051]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:43:16 compute-0 sudo[37049]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:17 compute-0 sudo[37201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugbofmyulldmufkbcmfsajeglzxvety ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524997.225477-282-234439241881546/AnsiballZ_getent.py'
Jan 27 14:43:17 compute-0 sudo[37201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:17 compute-0 python3.9[37203]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 14:43:17 compute-0 sudo[37201]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:17 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:43:18 compute-0 sudo[37355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdefdpfcerosjcerqpgkmtntdkujkfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524998.1614375-290-219465065208568/AnsiballZ_group.py'
Jan 27 14:43:18 compute-0 sudo[37355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:18 compute-0 python3.9[37357]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 14:43:18 compute-0 groupadd[37358]: group added to /etc/group: name=qemu, GID=107
Jan 27 14:43:19 compute-0 groupadd[37358]: group added to /etc/gshadow: name=qemu
Jan 27 14:43:19 compute-0 groupadd[37358]: new group: name=qemu, GID=107
Jan 27 14:43:19 compute-0 sudo[37355]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:19 compute-0 sudo[37513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enwgsydkxstioqursevrtasamvstsjja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769524999.2708611-298-144331424926216/AnsiballZ_user.py'
Jan 27 14:43:19 compute-0 sudo[37513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:20 compute-0 python3.9[37515]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 14:43:20 compute-0 useradd[37517]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 14:43:20 compute-0 sudo[37513]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:21 compute-0 sudo[37673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mquxrviqwykhhrqmfalrhqvejinpgxjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525000.8988705-306-246969190474035/AnsiballZ_getent.py'
Jan 27 14:43:21 compute-0 sudo[37673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:21 compute-0 python3.9[37675]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 14:43:21 compute-0 sudo[37673]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:22 compute-0 sudo[37826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgucyqrwdmsgxwdczczovovlnuvicsxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525001.719676-314-217648377704930/AnsiballZ_group.py'
Jan 27 14:43:22 compute-0 sudo[37826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:22 compute-0 python3.9[37828]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 14:43:22 compute-0 groupadd[37829]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 27 14:43:22 compute-0 groupadd[37829]: group added to /etc/gshadow: name=hugetlbfs
Jan 27 14:43:22 compute-0 groupadd[37829]: new group: name=hugetlbfs, GID=42477
Jan 27 14:43:22 compute-0 sudo[37826]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:23 compute-0 sudo[37984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnumxwsszjbnqwknpzqcvmbqttgxpwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525002.8011355-323-100739451163506/AnsiballZ_file.py'
Jan 27 14:43:23 compute-0 sudo[37984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:23 compute-0 python3.9[37986]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 14:43:23 compute-0 sudo[37984]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:23 compute-0 sudo[38136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swvekatcfemqbwchtvodxcupllmtmhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525003.6828024-334-130089028893794/AnsiballZ_dnf.py'
Jan 27 14:43:23 compute-0 sudo[38136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:24 compute-0 python3.9[38138]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:43:26 compute-0 sudo[38136]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:26 compute-0 sudo[38289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kurojpmxmvuiwsscycttgttpxupvbhof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525006.6783817-342-199619736705837/AnsiballZ_file.py'
Jan 27 14:43:26 compute-0 sudo[38289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:27 compute-0 python3.9[38291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:43:27 compute-0 sudo[38289]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:27 compute-0 sudo[38441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tofshmpilefudfmqevuomnygapcfjngj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525007.2423084-350-11637117677613/AnsiballZ_stat.py'
Jan 27 14:43:27 compute-0 sudo[38441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:27 compute-0 python3.9[38443]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:43:27 compute-0 sudo[38441]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:28 compute-0 sudo[38564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atnlxvewodvgowtrrblwfzkmopevjclb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525007.2423084-350-11637117677613/AnsiballZ_copy.py'
Jan 27 14:43:28 compute-0 sudo[38564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:28 compute-0 python3.9[38566]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525007.2423084-350-11637117677613/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:43:28 compute-0 sudo[38564]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:29 compute-0 sudo[38716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpoliakwxavzjgoscwttobkqhfubfya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525008.4937108-365-110827220588812/AnsiballZ_systemd.py'
Jan 27 14:43:29 compute-0 sudo[38716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:29 compute-0 python3.9[38718]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:43:29 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 14:43:29 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 27 14:43:29 compute-0 kernel: Bridge firewalling registered
Jan 27 14:43:29 compute-0 systemd-modules-load[38722]: Inserted module 'br_netfilter'
Jan 27 14:43:29 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 14:43:29 compute-0 sudo[38716]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:29 compute-0 sudo[38875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdyserosvvjitcrwcxlnwyvzjnygzcjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525009.6414602-373-27923863291506/AnsiballZ_stat.py'
Jan 27 14:43:29 compute-0 sudo[38875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:30 compute-0 python3.9[38877]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:43:30 compute-0 sudo[38875]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:30 compute-0 sudo[38998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxpofyapuazrpzsarpwwvqvaeljhjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525009.6414602-373-27923863291506/AnsiballZ_copy.py'
Jan 27 14:43:30 compute-0 sudo[38998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:30 compute-0 python3.9[39000]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525009.6414602-373-27923863291506/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:43:30 compute-0 sudo[38998]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:31 compute-0 sudo[39150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnweloaaamvazwdfdpbgxsophiikyjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525011.094708-391-220678097511270/AnsiballZ_dnf.py'
Jan 27 14:43:31 compute-0 sudo[39150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:31 compute-0 python3.9[39152]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:43:37 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:43:37 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:43:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:43:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:43:37 compute-0 systemd[1]: Reloading.
Jan 27 14:43:37 compute-0 systemd-rc-local-generator[39217]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:43:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:43:38 compute-0 sudo[39150]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:39 compute-0 python3.9[40372]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:43:39 compute-0 python3.9[41330]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 14:43:40 compute-0 python3.9[42133]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:43:40 compute-0 sudo[42978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcjlofxujiwqbzjytgrtwvytrevuiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525020.6520596-430-83598428121856/AnsiballZ_command.py'
Jan 27 14:43:40 compute-0 sudo[42978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:41 compute-0 python3.9[42989]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 14:43:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:43:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:43:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.674s CPU time.
Jan 27 14:43:41 compute-0 systemd[1]: run-r5cdbc71a89ec4c15a02b04715d45087c.service: Deactivated successfully.
Jan 27 14:43:41 compute-0 systemd[1]: Starting Authorization Manager...
Jan 27 14:43:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 14:43:41 compute-0 polkitd[43538]: Started polkitd version 0.117
Jan 27 14:43:41 compute-0 polkitd[43538]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 14:43:41 compute-0 polkitd[43538]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 14:43:41 compute-0 polkitd[43538]: Finished loading, compiling and executing 2 rules
Jan 27 14:43:41 compute-0 systemd[1]: Started Authorization Manager.
Jan 27 14:43:41 compute-0 polkitd[43538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 27 14:43:41 compute-0 sudo[42978]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:42 compute-0 sudo[43706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djoinzdxusbwgkwxdxxlubjdovgoqcuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525021.9769728-439-252148609319490/AnsiballZ_systemd.py'
Jan 27 14:43:42 compute-0 sudo[43706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:42 compute-0 python3.9[43708]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:43:42 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 14:43:42 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 14:43:42 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 14:43:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 14:43:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 14:43:42 compute-0 sudo[43706]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:43 compute-0 python3.9[43870]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 14:43:45 compute-0 sudo[44020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myraiycqqmwekazlkiwiozkyxtnaedxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525025.01038-496-13187653481233/AnsiballZ_systemd.py'
Jan 27 14:43:45 compute-0 sudo[44020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:45 compute-0 python3.9[44022]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:43:45 compute-0 systemd[1]: Reloading.
Jan 27 14:43:45 compute-0 systemd-rc-local-generator[44049]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:43:45 compute-0 sudo[44020]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:46 compute-0 sudo[44209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pssaxzcgegqnojgtlfaxgeduodjchjuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525026.0806732-496-11197077963456/AnsiballZ_systemd.py'
Jan 27 14:43:46 compute-0 sudo[44209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:46 compute-0 python3.9[44211]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:43:46 compute-0 systemd[1]: Reloading.
Jan 27 14:43:46 compute-0 systemd-rc-local-generator[44241]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:43:46 compute-0 sudo[44209]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:47 compute-0 sudo[44399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxjfdsplzcjbvqkmtlzsgxczyidjxgef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525027.3728178-512-153566772995911/AnsiballZ_command.py'
Jan 27 14:43:47 compute-0 sudo[44399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:47 compute-0 python3.9[44401]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:47 compute-0 sudo[44399]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:48 compute-0 sudo[44552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemfooivseseoiljpsxsblcnjbsmoryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525028.0070498-520-205677957569902/AnsiballZ_command.py'
Jan 27 14:43:48 compute-0 sudo[44552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:48 compute-0 python3.9[44554]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:48 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 27 14:43:48 compute-0 sudo[44552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:49 compute-0 sudo[44705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzvobzukezzmrlsnlhhvuhererphuqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525028.7541316-528-209613171440444/AnsiballZ_command.py'
Jan 27 14:43:49 compute-0 sudo[44705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:49 compute-0 python3.9[44707]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:50 compute-0 sudo[44705]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:51 compute-0 sudo[44867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckvnpifdmlahhlcrnzydxkcpplnapwkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525030.9612844-536-213619938828996/AnsiballZ_command.py'
Jan 27 14:43:51 compute-0 sudo[44867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:51 compute-0 python3.9[44869]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:43:51 compute-0 sudo[44867]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:51 compute-0 sudo[45020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vosprjriywdqbfxyxqbgzqbaifykgywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525031.5876043-544-73429736419891/AnsiballZ_systemd.py'
Jan 27 14:43:51 compute-0 sudo[45020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:43:52 compute-0 python3.9[45022]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:43:52 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 14:43:52 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 14:43:52 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 27 14:43:52 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 27 14:43:52 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 14:43:52 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 27 14:43:52 compute-0 sudo[45020]: pam_unix(sudo:session): session closed for user root
Jan 27 14:43:52 compute-0 sshd-session[31446]: Connection closed by 192.168.122.30 port 58600
Jan 27 14:43:52 compute-0 sshd-session[31443]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:43:52 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 27 14:43:52 compute-0 systemd[1]: session-10.scope: Consumed 2min 16.853s CPU time.
Jan 27 14:43:52 compute-0 systemd-logind[820]: Session 10 logged out. Waiting for processes to exit.
Jan 27 14:43:52 compute-0 systemd-logind[820]: Removed session 10.
Jan 27 14:43:58 compute-0 sshd-session[45052]: Accepted publickey for zuul from 192.168.122.30 port 55114 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:43:58 compute-0 systemd-logind[820]: New session 11 of user zuul.
Jan 27 14:43:58 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 27 14:43:58 compute-0 sshd-session[45052]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:43:59 compute-0 python3.9[45205]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:44:01 compute-0 python3.9[45359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:44:02 compute-0 sudo[45513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbpjpzfrrrtxcokzzmmfyaoobrbhemsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525041.6652992-45-10439571417965/AnsiballZ_command.py'
Jan 27 14:44:02 compute-0 sudo[45513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:02 compute-0 python3.9[45515]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:44:02 compute-0 sudo[45513]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:03 compute-0 python3.9[45666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:44:04 compute-0 sudo[45820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkillgholqmkrgvpnnqrqomxjxfwlrlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525043.9305968-65-158150597294168/AnsiballZ_setup.py'
Jan 27 14:44:04 compute-0 sudo[45820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:04 compute-0 python3.9[45822]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:44:04 compute-0 sudo[45820]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:05 compute-0 sudo[45904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vraxexysqhatnnqyoknsvlvqmoifdizq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525043.9305968-65-158150597294168/AnsiballZ_dnf.py'
Jan 27 14:44:05 compute-0 sudo[45904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:05 compute-0 python3.9[45906]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:44:06 compute-0 sudo[45904]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:07 compute-0 sudo[46057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfvudduimigeowkyjxdpdtohrjpzxjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525046.930597-77-66164714419721/AnsiballZ_setup.py'
Jan 27 14:44:07 compute-0 sudo[46057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:07 compute-0 python3.9[46059]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:44:07 compute-0 sudo[46057]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:08 compute-0 sudo[46228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvwlhppsogevutogcpkrvrplxtrejbkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525047.870763-88-255708875066237/AnsiballZ_file.py'
Jan 27 14:44:08 compute-0 sudo[46228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:08 compute-0 python3.9[46230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:44:08 compute-0 sudo[46228]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:08 compute-0 sudo[46380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxlmeqywzdtjfcptcfoitbmbctwtfnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525048.698722-96-162005814994452/AnsiballZ_command.py'
Jan 27 14:44:08 compute-0 sudo[46380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:09 compute-0 python3.9[46382]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:44:09 compute-0 podman[46383]: 2026-01-27 14:44:09.235987323 +0000 UTC m=+0.048596299 system refresh
Jan 27 14:44:09 compute-0 sudo[46380]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:09 compute-0 sudo[46544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfucwnauygitytxixiuookydlbbqbsir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525049.4337862-104-131320529714406/AnsiballZ_stat.py'
Jan 27 14:44:09 compute-0 sudo[46544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:10 compute-0 python3.9[46546]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:44:10 compute-0 sudo[46544]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:44:10 compute-0 sudo[46667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaaomiadtpzihazxmguwucogquiztfjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525049.4337862-104-131320529714406/AnsiballZ_copy.py'
Jan 27 14:44:10 compute-0 sudo[46667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:10 compute-0 python3.9[46669]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525049.4337862-104-131320529714406/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ffc0a777a7a4c0818646ebfee396f169c197eacf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:44:11 compute-0 sudo[46667]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:11 compute-0 sudo[46819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoksjcfvofcfmrktrloqsynzloorxrwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525051.1845176-119-74749861588747/AnsiballZ_stat.py'
Jan 27 14:44:11 compute-0 sudo[46819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:11 compute-0 python3.9[46821]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:44:11 compute-0 sudo[46819]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:12 compute-0 sudo[46942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fehxypwpvwmpejhdggsrlnflkwwulzhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525051.1845176-119-74749861588747/AnsiballZ_copy.py'
Jan 27 14:44:12 compute-0 sudo[46942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:12 compute-0 python3.9[46944]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525051.1845176-119-74749861588747/.source.conf follow=False _original_basename=registries.conf.j2 checksum=65c075c2d8c66229f820dfec180ccef7e3484b36 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:44:12 compute-0 sudo[46942]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:12 compute-0 sudo[47094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uachrnhgjmvzpnilgpyiuaqbraaafnbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525052.5993311-135-16192955402410/AnsiballZ_ini_file.py'
Jan 27 14:44:12 compute-0 sudo[47094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:13 compute-0 python3.9[47096]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:44:13 compute-0 sudo[47094]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:13 compute-0 sudo[47246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maiiteghtpueeiwsgdewubfgvolcxxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525053.3320322-135-72340496469038/AnsiballZ_ini_file.py'
Jan 27 14:44:13 compute-0 sudo[47246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:13 compute-0 python3.9[47248]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:44:13 compute-0 sudo[47246]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:14 compute-0 sudo[47398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqixutefghztvpicmaozclihsvlgnmfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525053.9590547-135-206597061002614/AnsiballZ_ini_file.py'
Jan 27 14:44:14 compute-0 sudo[47398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:14 compute-0 python3.9[47400]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:44:14 compute-0 sudo[47398]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:15 compute-0 sudo[47550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aokhobtwqwtemhylqmuqkxvxcczovicg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525054.759395-135-19587415865821/AnsiballZ_ini_file.py'
Jan 27 14:44:15 compute-0 sudo[47550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:15 compute-0 python3.9[47552]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:44:15 compute-0 sudo[47550]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:16 compute-0 python3.9[47702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:44:16 compute-0 sudo[47854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycdleeoodnugozggjsuuztoicecupkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525056.495841-175-279589984706370/AnsiballZ_dnf.py'
Jan 27 14:44:16 compute-0 sudo[47854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:17 compute-0 python3.9[47856]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:18 compute-0 sudo[47854]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:18 compute-0 sudo[48007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuthusinxnfqflnjkaywrujfxppmpstj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525058.4146788-183-97689568266638/AnsiballZ_dnf.py'
Jan 27 14:44:18 compute-0 sudo[48007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:18 compute-0 python3.9[48009]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:20 compute-0 sudo[48007]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:21 compute-0 sudo[48167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prptidiumdklichknjgwduqbdldawlzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525060.7383852-193-98001396837696/AnsiballZ_dnf.py'
Jan 27 14:44:21 compute-0 sudo[48167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:21 compute-0 python3.9[48169]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:22 compute-0 sudo[48167]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:22 compute-0 sudo[48320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlpsscbavsuqvkfzztymcwukqhjyzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525062.6485689-202-65940325312588/AnsiballZ_dnf.py'
Jan 27 14:44:22 compute-0 sudo[48320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:23 compute-0 python3.9[48322]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:24 compute-0 sudo[48320]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:24 compute-0 sudo[48473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuuveoaabtfdimekfvizngqzsczzkyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525064.7302425-213-1711736134420/AnsiballZ_dnf.py'
Jan 27 14:44:24 compute-0 sudo[48473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:25 compute-0 python3.9[48475]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:26 compute-0 sudo[48473]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:27 compute-0 sudo[48629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgbabshfnmfrjwgpfvayrwfarpsiyfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525067.1035566-221-69711595686545/AnsiballZ_dnf.py'
Jan 27 14:44:27 compute-0 sudo[48629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:27 compute-0 python3.9[48631]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:29 compute-0 sudo[48629]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:30 compute-0 sudo[48799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-augqbwjoifgvysmtfurelhqvfjkvbhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525070.0667095-230-25572711636074/AnsiballZ_dnf.py'
Jan 27 14:44:30 compute-0 sudo[48799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:30 compute-0 python3.9[48801]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:44:31 compute-0 sudo[48799]: pam_unix(sudo:session): session closed for user root
Jan 27 14:44:32 compute-0 sudo[48952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzabbpfpjhelssefpqvdapkjmjcahvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525071.970124-239-258092717090835/AnsiballZ_dnf.py'
Jan 27 14:44:32 compute-0 sudo[48952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:44:32 compute-0 python3.9[48954]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:45:07 compute-0 sudo[48952]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:08 compute-0 sudo[49288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbqyexkilmqhtnyerfzaqpbrwuvyvfwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525108.0523238-248-274756938058031/AnsiballZ_dnf.py'
Jan 27 14:45:08 compute-0 sudo[49288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:08 compute-0 python3.9[49290]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:45:09 compute-0 sudo[49288]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:10 compute-0 sudo[49444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoseixlwliczjjisdwitjshwofhgilqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525110.4371161-258-159743081110908/AnsiballZ_dnf.py'
Jan 27 14:45:10 compute-0 sudo[49444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:10 compute-0 python3.9[49446]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:45:12 compute-0 sudo[49444]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:13 compute-0 sudo[49601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauyqaczunwgvwqmvyqzynivcynjrxrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525112.78995-269-123275656135012/AnsiballZ_file.py'
Jan 27 14:45:13 compute-0 sudo[49601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:13 compute-0 python3.9[49603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:45:13 compute-0 sudo[49601]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:13 compute-0 sudo[49776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcsjjzecoyfthvtvstbtlpnpiqmrlrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525113.5171733-277-17652482207604/AnsiballZ_stat.py'
Jan 27 14:45:13 compute-0 sudo[49776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:14 compute-0 python3.9[49778]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:45:14 compute-0 sudo[49776]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:14 compute-0 sudo[49899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujucxmpyxivhfkaugcmerykowhgnxzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525113.5171733-277-17652482207604/AnsiballZ_copy.py'
Jan 27 14:45:14 compute-0 sudo[49899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:14 compute-0 python3.9[49901]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769525113.5171733-277-17652482207604/.source.json _original_basename=.7t54o6cz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:45:14 compute-0 sudo[49899]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:15 compute-0 sudo[50051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvykaudpvhlbernxsbuznrhohqogdgjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525115.1066458-295-241605730659380/AnsiballZ_podman_image.py'
Jan 27 14:45:15 compute-0 sudo[50051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:15 compute-0 python3.9[50053]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3828969158-lower\x2dmapped.mount: Deactivated successfully.
Jan 27 14:45:21 compute-0 podman[50064]: 2026-01-27 14:45:21.384202719 +0000 UTC m=+5.484340353 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 14:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:21 compute-0 sudo[50051]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:22 compute-0 sudo[50358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyaljuirubrfhzcmwspuqfoqpwtcwhfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525121.8826754-306-57301003096851/AnsiballZ_podman_image.py'
Jan 27 14:45:22 compute-0 sudo[50358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:22 compute-0 python3.9[50360]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:31 compute-0 podman[50372]: 2026-01-27 14:45:31.208394727 +0000 UTC m=+8.792077882 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:31 compute-0 sudo[50358]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:31 compute-0 sudo[50663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjpdbhtepqzocrbkfxoxsethtrcvsudy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525131.701479-316-148395808548460/AnsiballZ_podman_image.py'
Jan 27 14:45:31 compute-0 sudo[50663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:32 compute-0 python3.9[50665]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:42 compute-0 podman[50677]: 2026-01-27 14:45:42.4881281 +0000 UTC m=+10.224878254 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 14:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:42 compute-0 sudo[50663]: pam_unix(sudo:session): session closed for user root
Jan 27 14:45:43 compute-0 sudo[50933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeonkoqdaalydmiwwclhfktuqieuinzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525143.157704-327-79747535890336/AnsiballZ_podman_image.py'
Jan 27 14:45:43 compute-0 sudo[50933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:45:43 compute-0 python3.9[50935]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:59 compute-0 podman[50948]: 2026-01-27 14:45:59.718932835 +0000 UTC m=+16.036396711 image pull 784fb2adc2a024f7e3dc24a0780ee88d1dda9d64127026d21a9dba69f9a258da quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 14:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:45:59 compute-0 sudo[50933]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:00 compute-0 sudo[51264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwqudiujsukznzjcqziwttqoxortbax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525160.061594-327-48439635166873/AnsiballZ_podman_image.py'
Jan 27 14:46:00 compute-0 sudo[51264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:00 compute-0 python3.9[51266]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:46:02 compute-0 podman[51278]: 2026-01-27 14:46:02.229706538 +0000 UTC m=+1.528726624 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 14:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:02 compute-0 sudo[51264]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:03 compute-0 sudo[51550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueuhhlheikffutbnhbnfbsbpsgekfpmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525162.761059-343-240134444238924/AnsiballZ_podman_image.py'
Jan 27 14:46:03 compute-0 sudo[51550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:03 compute-0 python3.9[51552]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:07 compute-0 podman[51564]: 2026-01-27 14:46:07.141455639 +0000 UTC m=+3.800721361 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 14:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:07 compute-0 sudo[51550]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:07 compute-0 sudo[51827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewmqxttaqptbeebwebuoyvubbmqohtnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525167.4463582-343-14452652789751/AnsiballZ_podman_image.py'
Jan 27 14:46:07 compute-0 sudo[51827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:07 compute-0 python3.9[51829]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 14:46:15 compute-0 podman[51841]: 2026-01-27 14:46:15.109869243 +0000 UTC m=+7.147359258 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 14:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:46:15 compute-0 sudo[51827]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:15 compute-0 sshd-session[45055]: Connection closed by 192.168.122.30 port 55114
Jan 27 14:46:15 compute-0 sshd-session[45052]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:46:15 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 27 14:46:15 compute-0 systemd[1]: session-11.scope: Consumed 2min 22.177s CPU time.
Jan 27 14:46:15 compute-0 systemd-logind[820]: Session 11 logged out. Waiting for processes to exit.
Jan 27 14:46:15 compute-0 systemd-logind[820]: Removed session 11.
Jan 27 14:46:21 compute-0 sshd-session[52092]: Accepted publickey for zuul from 192.168.122.30 port 53358 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:46:21 compute-0 systemd-logind[820]: New session 12 of user zuul.
Jan 27 14:46:21 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 27 14:46:21 compute-0 sshd-session[52092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:46:22 compute-0 python3.9[52245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:46:23 compute-0 sudo[52399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmqyduschcpnmplentuvoelrvgijhxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525183.3352766-31-41119339156165/AnsiballZ_getent.py'
Jan 27 14:46:23 compute-0 sudo[52399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:23 compute-0 python3.9[52401]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 14:46:23 compute-0 sudo[52399]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:24 compute-0 sudo[52552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cygdsrhzcoqdowuaivelpiwmogkbjiac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525184.0900476-39-45051363465517/AnsiballZ_group.py'
Jan 27 14:46:24 compute-0 sudo[52552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:24 compute-0 python3.9[52554]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 14:46:24 compute-0 groupadd[52555]: group added to /etc/group: name=openvswitch, GID=42476
Jan 27 14:46:24 compute-0 groupadd[52555]: group added to /etc/gshadow: name=openvswitch
Jan 27 14:46:24 compute-0 groupadd[52555]: new group: name=openvswitch, GID=42476
Jan 27 14:46:24 compute-0 sudo[52552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:25 compute-0 sudo[52710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kflpxffcbcmpjddwauuldqceoduoypfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525184.884846-47-1465116177286/AnsiballZ_user.py'
Jan 27 14:46:25 compute-0 sudo[52710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:25 compute-0 python3.9[52712]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 14:46:25 compute-0 useradd[52714]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 14:46:25 compute-0 useradd[52714]: add 'openvswitch' to group 'hugetlbfs'
Jan 27 14:46:25 compute-0 useradd[52714]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 27 14:46:25 compute-0 sudo[52710]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:26 compute-0 sudo[52870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiykazrcydojnzewpwvxnlomgnrjory ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525186.0960488-57-226747537138113/AnsiballZ_setup.py'
Jan 27 14:46:26 compute-0 sudo[52870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:26 compute-0 python3.9[52872]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:46:26 compute-0 sudo[52870]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:27 compute-0 sudo[52954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvrnmgwqcomienqbqtrofkjobbsnhsjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525186.0960488-57-226747537138113/AnsiballZ_dnf.py'
Jan 27 14:46:27 compute-0 sudo[52954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:27 compute-0 python3.9[52956]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:46:29 compute-0 sudo[52954]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:29 compute-0 sudo[53116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okzdgnzwnowtmifhhdcqhphaihpmfmdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525189.5311985-71-209007584306981/AnsiballZ_dnf.py'
Jan 27 14:46:29 compute-0 sudo[53116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:30 compute-0 python3.9[53118]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:46:45 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:46:45 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:46:45 compute-0 groupadd[53141]: group added to /etc/group: name=unbound, GID=994
Jan 27 14:46:45 compute-0 groupadd[53141]: group added to /etc/gshadow: name=unbound
Jan 27 14:46:45 compute-0 groupadd[53141]: new group: name=unbound, GID=994
Jan 27 14:46:45 compute-0 useradd[53148]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 27 14:46:45 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 27 14:46:45 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 27 14:46:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:46:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:46:47 compute-0 systemd[1]: Reloading.
Jan 27 14:46:47 compute-0 systemd-rc-local-generator[53646]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:46:47 compute-0 systemd-sysv-generator[53649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:46:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:46:48 compute-0 sudo[53116]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:46:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:46:48 compute-0 systemd[1]: run-r9e84e0b21bdc40ea9902d58188ce1ccb.service: Deactivated successfully.
Jan 27 14:46:48 compute-0 sudo[54213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgmpsgfpcfwwnjsatrefzjfuynkiwlgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525208.27795-79-129657829811580/AnsiballZ_systemd.py'
Jan 27 14:46:48 compute-0 sudo[54213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:49 compute-0 python3.9[54215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:46:49 compute-0 systemd[1]: Reloading.
Jan 27 14:46:49 compute-0 systemd-rc-local-generator[54246]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:46:49 compute-0 systemd-sysv-generator[54249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:46:49 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 27 14:46:49 compute-0 chown[54257]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 27 14:46:49 compute-0 ovs-ctl[54262]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 27 14:46:49 compute-0 ovs-ctl[54262]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 27 14:46:49 compute-0 ovs-ctl[54262]: Starting ovsdb-server [  OK  ]
Jan 27 14:46:49 compute-0 ovs-vsctl[54311]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 27 14:46:49 compute-0 ovs-vsctl[54331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"320c7d4f-8b68-4343-92ac-19c792fa938e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 27 14:46:49 compute-0 ovs-ctl[54262]: Configuring Open vSwitch system IDs [  OK  ]
Jan 27 14:46:49 compute-0 ovs-ctl[54262]: Enabling remote OVSDB managers [  OK  ]
Jan 27 14:46:49 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 27 14:46:49 compute-0 ovs-vsctl[54337]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 14:46:49 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 27 14:46:49 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 27 14:46:49 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 27 14:46:49 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 27 14:46:49 compute-0 ovs-ctl[54381]: Inserting openvswitch module [  OK  ]
Jan 27 14:46:49 compute-0 ovs-ctl[54350]: Starting ovs-vswitchd [  OK  ]
Jan 27 14:46:49 compute-0 ovs-ctl[54350]: Enabling remote OVSDB managers [  OK  ]
Jan 27 14:46:49 compute-0 ovs-vsctl[54400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 14:46:49 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 27 14:46:49 compute-0 systemd[1]: Starting Open vSwitch...
Jan 27 14:46:49 compute-0 systemd[1]: Finished Open vSwitch.
Jan 27 14:46:50 compute-0 sudo[54213]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:50 compute-0 python3.9[54551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:46:51 compute-0 sudo[54701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nahffhbrqbijxweahngfbqdctaxcrhaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525210.9657173-97-81667320704946/AnsiballZ_sefcontext.py'
Jan 27 14:46:51 compute-0 sudo[54701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:51 compute-0 python3.9[54703]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 14:46:53 compute-0 kernel: SELinux:  Converting 2752 SID table entries...
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:46:53 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:46:53 compute-0 sudo[54701]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:54 compute-0 python3.9[54858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:46:54 compute-0 sudo[55014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlccpxwausubadpiluarxpvqkbdrool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525214.5078378-115-170365143381976/AnsiballZ_dnf.py'
Jan 27 14:46:54 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 27 14:46:54 compute-0 sudo[55014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:55 compute-0 python3.9[55016]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:46:56 compute-0 sudo[55014]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:56 compute-0 sudo[55167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekviwhstwybdguzdkgwgglrudlagicvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525216.3907962-123-183013388966540/AnsiballZ_command.py'
Jan 27 14:46:56 compute-0 sudo[55167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:57 compute-0 python3.9[55169]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:46:57 compute-0 sudo[55167]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:58 compute-0 sudo[55454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfgcddpsymahthsdntnszpotajxbyrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525217.846671-131-213766686574445/AnsiballZ_file.py'
Jan 27 14:46:58 compute-0 sudo[55454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:58 compute-0 python3.9[55456]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 14:46:58 compute-0 sudo[55454]: pam_unix(sudo:session): session closed for user root
Jan 27 14:46:59 compute-0 python3.9[55606]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:46:59 compute-0 sudo[55758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibqbjtpjhcsdrsmchzhvyzgpzvdksdhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525219.4955359-147-269132751585308/AnsiballZ_dnf.py'
Jan 27 14:46:59 compute-0 sudo[55758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:46:59 compute-0 python3.9[55760]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:47:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:47:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:47:01 compute-0 systemd[1]: Reloading.
Jan 27 14:47:01 compute-0 systemd-rc-local-generator[55794]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:47:01 compute-0 systemd-sysv-generator[55800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:47:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:47:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:47:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:47:01 compute-0 systemd[1]: run-r27c2d16ef6744d3280980cfa6d105233.service: Deactivated successfully.
Jan 27 14:47:02 compute-0 sudo[55758]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:02 compute-0 sudo[56075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqhpmyxoxbmjlmdpjlnxknmnixaervlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525222.179939-155-219298611015341/AnsiballZ_systemd.py'
Jan 27 14:47:02 compute-0 sudo[56075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:02 compute-0 python3.9[56077]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:47:02 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 14:47:02 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 14:47:02 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 14:47:02 compute-0 systemd[1]: Stopping Network Manager...
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8147] caught SIGTERM, shutting down normally.
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8160] dhcp4 (eth0): canceled DHCP transaction
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8160] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8160] dhcp4 (eth0): state changed no lease
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8162] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 14:47:02 compute-0 NetworkManager[7214]: <info>  [1769525222.8216] exiting (success)
Jan 27 14:47:02 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:47:02 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:47:02 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 14:47:02 compute-0 systemd[1]: Stopped Network Manager.
Jan 27 14:47:02 compute-0 systemd[1]: NetworkManager.service: Consumed 22.297s CPU time, 4.1M memory peak, read 0B from disk, written 17.0K to disk.
Jan 27 14:47:02 compute-0 systemd[1]: Starting Network Manager...
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.8909] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3ec64c28-9072-4af9-bb4c-439f11a25520)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.8911] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.8963] manager[0x55ef2cdcb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 14:47:02 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 14:47:02 compute-0 systemd[1]: Started Hostname Service.
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9635] hostname: hostname: using hostnamed
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9637] hostname: static hostname changed from (none) to "compute-0"
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9640] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9644] manager[0x55ef2cdcb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9644] manager[0x55ef2cdcb000]: rfkill: WWAN hardware radio set enabled
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9665] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9672] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9673] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9674] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9674] manager: Networking is enabled by state file
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9677] settings: Loaded settings plugin: keyfile (internal)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9680] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9707] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9717] dhcp: init: Using DHCP client 'internal'
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9719] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9725] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9731] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9739] device (lo): Activation: starting connection 'lo' (6256a758-a13a-40c3-b045-d212ec55f25b)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9745] device (eth0): carrier: link connected
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9750] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9755] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9756] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9764] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9771] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9776] device (eth1): carrier: link connected
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9780] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9786] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (48c2b12b-261d-5c47-9095-8385fdd77179) (indicated)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9786] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9792] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9799] device (eth1): Activation: starting connection 'ci-private-network' (48c2b12b-261d-5c47-9095-8385fdd77179)
Jan 27 14:47:02 compute-0 systemd[1]: Started Network Manager.
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9807] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9820] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9823] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9825] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9828] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9831] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9834] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9837] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9844] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9850] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9855] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9865] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9877] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9884] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9885] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9891] device (lo): Activation: successful, device activated.
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9901] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9904] dhcp4 (eth0): state changed new lease, address=38.129.56.182
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9908] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9911] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9913] device (eth1): Activation: successful, device activated.
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9925] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 14:47:02 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 27 14:47:02 compute-0 NetworkManager[56090]: <info>  [1769525222.9990] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0017] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0018] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0022] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0028] device (eth0): Activation: successful, device activated.
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0035] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 14:47:03 compute-0 NetworkManager[56090]: <info>  [1769525223.0042] manager: startup complete
Jan 27 14:47:03 compute-0 sudo[56075]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:03 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 27 14:47:03 compute-0 sudo[56303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrgdgmftlrchrfjmibedxrhiwfbgouvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525223.1936915-163-15626061459225/AnsiballZ_dnf.py'
Jan 27 14:47:03 compute-0 sudo[56303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:03 compute-0 python3.9[56305]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:47:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:47:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:47:10 compute-0 systemd[1]: Reloading.
Jan 27 14:47:10 compute-0 systemd-rc-local-generator[56359]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:47:10 compute-0 systemd-sysv-generator[56362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:47:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:47:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:47:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:47:11 compute-0 systemd[1]: run-r7454cedb50e64cdeaf8bae3b1569d349.service: Deactivated successfully.
Jan 27 14:47:11 compute-0 sudo[56303]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:12 compute-0 sudo[56762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opduotocxvhoinowrrlwzpaqlrmypzzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525232.0416203-175-102612945394423/AnsiballZ_stat.py'
Jan 27 14:47:12 compute-0 sudo[56762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:12 compute-0 python3.9[56764]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:47:12 compute-0 sudo[56762]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:13 compute-0 sudo[56914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfvielvtryjjeygkxzxbpkhpokfcpdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525232.6763482-184-141207385321350/AnsiballZ_ini_file.py'
Jan 27 14:47:13 compute-0 sudo[56914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:13 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:47:13 compute-0 python3.9[56916]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:13 compute-0 sudo[56914]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:13 compute-0 sudo[57068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcccivwbrjzeudwtjulcfzyjamutuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525233.5592954-194-154854670421162/AnsiballZ_ini_file.py'
Jan 27 14:47:13 compute-0 sudo[57068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:14 compute-0 python3.9[57070]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:14 compute-0 sudo[57068]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:14 compute-0 sudo[57220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edoskapqblbvqsnoearbiemynysorwhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525234.1740887-194-185976744911167/AnsiballZ_ini_file.py'
Jan 27 14:47:14 compute-0 sudo[57220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:14 compute-0 python3.9[57222]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:14 compute-0 sudo[57220]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:15 compute-0 sudo[57372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgwwpfazhxgwdptbltzgyrquekihfdbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525234.8211315-209-55054398293600/AnsiballZ_ini_file.py'
Jan 27 14:47:15 compute-0 sudo[57372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:15 compute-0 python3.9[57374]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:15 compute-0 sudo[57372]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:15 compute-0 sudo[57524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfypukmunqfggeivcdrhyinkdogywbvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525235.4940023-209-92975041859751/AnsiballZ_ini_file.py'
Jan 27 14:47:15 compute-0 sudo[57524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:15 compute-0 python3.9[57526]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:15 compute-0 sudo[57524]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:16 compute-0 sudo[57676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhpslbfzjjvdtmfzkpivubxfgusiiwym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525236.1575847-224-121377236364605/AnsiballZ_stat.py'
Jan 27 14:47:16 compute-0 sudo[57676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:16 compute-0 python3.9[57678]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:47:16 compute-0 sudo[57676]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:17 compute-0 sudo[57799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkivtfyrngypvwnuudlkgjjuhoeutyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525236.1575847-224-121377236364605/AnsiballZ_copy.py'
Jan 27 14:47:17 compute-0 sudo[57799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:17 compute-0 python3.9[57801]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525236.1575847-224-121377236364605/.source _original_basename=.xfhjfdtl follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:17 compute-0 sudo[57799]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:17 compute-0 sudo[57951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpolykwbqkpmcahwohrbebvscjcfwuxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525237.4719143-239-260537386582243/AnsiballZ_file.py'
Jan 27 14:47:17 compute-0 sudo[57951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:17 compute-0 python3.9[57953]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:17 compute-0 sudo[57951]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:18 compute-0 sudo[58103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfwbadoywhwoodxwkietwypqkcruwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525238.0635264-247-4357548242312/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 27 14:47:18 compute-0 sudo[58103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:18 compute-0 python3.9[58105]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 27 14:47:18 compute-0 sudo[58103]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:19 compute-0 sudo[58255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzdfdukigusvnypxmdmcwertqdlgynu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525238.8459907-256-231279323936949/AnsiballZ_file.py'
Jan 27 14:47:19 compute-0 sudo[58255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:19 compute-0 python3.9[58257]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:19 compute-0 sudo[58255]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:19 compute-0 sudo[58407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwiptljamzbyupybdtszdpfuspbucvaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525239.6355484-266-75762442570548/AnsiballZ_stat.py'
Jan 27 14:47:19 compute-0 sudo[58407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:20 compute-0 sudo[58407]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:20 compute-0 sudo[58530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onicihqcyzilxguvqfcdnowmxyovxiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525239.6355484-266-75762442570548/AnsiballZ_copy.py'
Jan 27 14:47:20 compute-0 sudo[58530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:20 compute-0 sudo[58530]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:21 compute-0 sudo[58682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkqhjpobqopbgdzbtsnwhhpdljqwigid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525240.7970002-281-166609694797301/AnsiballZ_slurp.py'
Jan 27 14:47:21 compute-0 sudo[58682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:21 compute-0 python3.9[58684]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 27 14:47:21 compute-0 sudo[58682]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:22 compute-0 sudo[58857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfctejxfbqpfiwbocrhgnvvbqupwkxi ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525241.537534-290-222797997833213/async_wrapper.py j16288934577 300 /home/zuul/.ansible/tmp/ansible-tmp-1769525241.537534-290-222797997833213/AnsiballZ_edpm_os_net_config.py _'
Jan 27 14:47:22 compute-0 sudo[58857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:22 compute-0 ansible-async_wrapper.py[58859]: Invoked with j16288934577 300 /home/zuul/.ansible/tmp/ansible-tmp-1769525241.537534-290-222797997833213/AnsiballZ_edpm_os_net_config.py _
Jan 27 14:47:22 compute-0 ansible-async_wrapper.py[58862]: Starting module and watcher
Jan 27 14:47:22 compute-0 ansible-async_wrapper.py[58862]: Start watching 58863 (300)
Jan 27 14:47:22 compute-0 ansible-async_wrapper.py[58863]: Start module (58863)
Jan 27 14:47:22 compute-0 ansible-async_wrapper.py[58859]: Return async_wrapper task started.
Jan 27 14:47:22 compute-0 sudo[58857]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:22 compute-0 python3.9[58864]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 27 14:47:23 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 27 14:47:23 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 27 14:47:23 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 27 14:47:23 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 27 14:47:23 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.1927] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.1952] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2467] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2468] audit: op="connection-add" uuid="c0dcd622-f886-4944-9c0f-100f080f25f0" name="br-ex-br" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2481] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2483] audit: op="connection-add" uuid="e4b36c7a-cc53-4c99-a456-c5a8336548f2" name="br-ex-port" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2494] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2495] audit: op="connection-add" uuid="d4533a72-9b28-4df5-85e3-e7711b787604" name="eth1-port" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2506] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2507] audit: op="connection-add" uuid="bbf9934d-b2d9-4ef9-9853-c9cdc1563b7d" name="vlan20-port" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2517] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2518] audit: op="connection-add" uuid="e91ee413-2cc0-45f2-8aab-814cc32e3049" name="vlan21-port" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2529] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2530] audit: op="connection-add" uuid="c6fe7505-20c8-4c35-acbd-e4800a52c71c" name="vlan22-port" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2549] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2563] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2565] audit: op="connection-add" uuid="fb0d9353-4564-49e2-a137-73dc207f5403" name="br-ex-if" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2607] audit: op="connection-update" uuid="48c2b12b-261d-5c47-9095-8385fdd77179" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.dns,ipv6.method,ipv6.routing-rules,ipv4.addresses,ipv4.routes,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routing-rules,connection.slave-type,connection.master,connection.controller,connection.port-type,connection.timestamp,ovs-external-ids.data,ovs-interface.type" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2624] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2625] audit: op="connection-add" uuid="97e9e6f4-369b-4051-89f4-fc820bd14e34" name="vlan20-if" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2639] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2640] audit: op="connection-add" uuid="d74bb6cd-d5d9-4f5d-b146-0c5cb859def3" name="vlan21-if" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2656] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2658] audit: op="connection-add" uuid="d0bdb290-d554-4475-bd92-235e54dc5e2b" name="vlan22-if" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2669] audit: op="connection-delete" uuid="3b3adbdf-4ae1-3614-8d44-182832ec9532" name="Wired connection 1" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2679] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2681] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2686] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2689] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c0dcd622-f886-4944-9c0f-100f080f25f0)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2690] audit: op="connection-activate" uuid="c0dcd622-f886-4944-9c0f-100f080f25f0" name="br-ex-br" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2691] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2692] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2695] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2699] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (e4b36c7a-cc53-4c99-a456-c5a8336548f2)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2700] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2701] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2706] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2708] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d4533a72-9b28-4df5-85e3-e7711b787604)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2710] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2710] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2714] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2717] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (bbf9934d-b2d9-4ef9-9853-c9cdc1563b7d)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2718] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2718] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2722] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2725] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (e91ee413-2cc0-45f2-8aab-814cc32e3049)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2726] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2727] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2730] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2733] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c6fe7505-20c8-4c35-acbd-e4800a52c71c)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2734] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2736] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2737] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2742] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2743] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2745] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2748] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (fb0d9353-4564-49e2-a137-73dc207f5403)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2748] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2750] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2752] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2752] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2753] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2760] device (eth1): disconnecting for new activation request.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2761] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2763] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2799] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2802] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2805] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2807] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2811] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2816] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (97e9e6f4-369b-4051-89f4-fc820bd14e34)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2817] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2822] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2824] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2826] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2830] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2832] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2836] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2841] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d74bb6cd-d5d9-4f5d-b146-0c5cb859def3)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2842] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2846] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2848] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2850] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2854] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <warn>  [1769525244.2856] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2860] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2866] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d0bdb290-d554-4475-bd92-235e54dc5e2b)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2867] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2871] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2873] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2875] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2877] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2891] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2893] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2896] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2899] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2905] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2909] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2913] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2915] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2917] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2930] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2934] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2937] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2938] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 systemd-udevd[58871]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:47:24 compute-0 kernel: Timeout policy base is empty
Jan 27 14:47:24 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2961] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2967] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2971] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2973] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2979] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2984] dhcp4 (eth0): canceled DHCP transaction
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2985] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2985] dhcp4 (eth0): state changed no lease
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.2987] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3002] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3005] audit: op="device-reapply" interface="eth1" ifindex=3 pid=58865 uid=0 result="fail" reason="Device is not activated"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3014] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 27 14:47:24 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3062] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3066] dhcp4 (eth0): state changed new lease, address=38.129.56.182
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3072] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3109] device (eth1): disconnecting for new activation request.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3110] audit: op="connection-activate" uuid="48c2b12b-261d-5c47-9095-8385fdd77179" name="ci-private-network" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3110] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3226] device (eth1): Activation: starting connection 'ci-private-network' (48c2b12b-261d-5c47-9095-8385fdd77179)
Jan 27 14:47:24 compute-0 kernel: br-ex: entered promiscuous mode
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3237] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3254] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3258] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3263] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3266] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3275] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3276] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3278] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3279] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3281] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3283] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58865 uid=0 result="success"
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3286] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3296] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3301] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3306] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3310] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3314] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3318] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3322] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3326] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3330] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3336] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3341] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3345] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3354] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3364] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 kernel: vlan22: entered promiscuous mode
Jan 27 14:47:24 compute-0 kernel: vlan21: entered promiscuous mode
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3422] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 systemd-udevd[58869]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3424] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3425] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3430] device (eth1): Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3433] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3438] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 kernel: vlan20: entered promiscuous mode
Jan 27 14:47:24 compute-0 systemd-udevd[58870]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:47:24 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3507] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3528] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3542] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3551] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3561] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3561] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3563] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3566] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3581] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3638] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3639] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3642] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3646] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3650] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 14:47:24 compute-0 NetworkManager[56090]: <info>  [1769525244.3654] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 14:47:25 compute-0 NetworkManager[56090]: <info>  [1769525245.4733] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58865 uid=0 result="success"
Jan 27 14:47:25 compute-0 NetworkManager[56090]: <info>  [1769525245.6709] checkpoint[0x55ef2cda1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 27 14:47:25 compute-0 NetworkManager[56090]: <info>  [1769525245.6712] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58865 uid=0 result="success"
Jan 27 14:47:25 compute-0 NetworkManager[56090]: <info>  [1769525245.9470] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58865 uid=0 result="success"
Jan 27 14:47:25 compute-0 NetworkManager[56090]: <info>  [1769525245.9479] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58865 uid=0 result="success"
Jan 27 14:47:26 compute-0 sudo[59198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovteqacefzlrlcxjbtlbhilybbaobof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525245.5764942-290-192358372053175/AnsiballZ_async_status.py'
Jan 27 14:47:26 compute-0 sudo[59198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.2432] audit: op="networking-control" arg="global-dns-configuration" pid=58865 uid=0 result="success"
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.2498] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.2766] audit: op="networking-control" arg="global-dns-configuration" pid=58865 uid=0 result="success"
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.2798] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58865 uid=0 result="success"
Jan 27 14:47:26 compute-0 python3.9[59200]: ansible-ansible.legacy.async_status Invoked with jid=j16288934577.58859 mode=status _async_dir=/root/.ansible_async
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.4066] checkpoint[0x55ef2cda1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 27 14:47:26 compute-0 NetworkManager[56090]: <info>  [1769525246.4069] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58865 uid=0 result="success"
Jan 27 14:47:26 compute-0 sudo[59198]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:26 compute-0 ansible-async_wrapper.py[58863]: Module complete (58863)
Jan 27 14:47:27 compute-0 ansible-async_wrapper.py[58862]: Done in kid B.
Jan 27 14:47:29 compute-0 sudo[59302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkylcopmcqpvyozgxkaexhugdccjlduk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525245.5764942-290-192358372053175/AnsiballZ_async_status.py'
Jan 27 14:47:29 compute-0 sudo[59302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:29 compute-0 python3.9[59304]: ansible-ansible.legacy.async_status Invoked with jid=j16288934577.58859 mode=status _async_dir=/root/.ansible_async
Jan 27 14:47:29 compute-0 sudo[59302]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:30 compute-0 sudo[59402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbrozdbyqlqzmnezzsxpeobqrlccfna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525245.5764942-290-192358372053175/AnsiballZ_async_status.py'
Jan 27 14:47:30 compute-0 sudo[59402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:30 compute-0 python3.9[59404]: ansible-ansible.legacy.async_status Invoked with jid=j16288934577.58859 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 14:47:30 compute-0 sudo[59402]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:30 compute-0 sudo[59554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsoaewemheigrdomeymaioiyxzuicyit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525250.490457-317-265990123685984/AnsiballZ_stat.py'
Jan 27 14:47:30 compute-0 sudo[59554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:30 compute-0 python3.9[59556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:47:30 compute-0 sudo[59554]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:31 compute-0 sudo[59677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxtwehbgupwktzbfpvlcexcuinrojpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525250.490457-317-265990123685984/AnsiballZ_copy.py'
Jan 27 14:47:31 compute-0 sudo[59677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:31 compute-0 python3.9[59679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525250.490457-317-265990123685984/.source.returncode _original_basename=.e8hfn017 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:31 compute-0 sudo[59677]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:32 compute-0 sudo[59829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygzpxsfhenpgvisiidlopcivmprcwmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525251.7394307-333-206701761057989/AnsiballZ_stat.py'
Jan 27 14:47:32 compute-0 sudo[59829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:32 compute-0 python3.9[59831]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:47:32 compute-0 sudo[59829]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:32 compute-0 sudo[59952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otdoacyfuwwvytmoycncdyffxzwkcmux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525251.7394307-333-206701761057989/AnsiballZ_copy.py'
Jan 27 14:47:32 compute-0 sudo[59952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:32 compute-0 python3.9[59954]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525251.7394307-333-206701761057989/.source.cfg _original_basename=.v7ev5dur follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:32 compute-0 sudo[59952]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:32 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 14:47:33 compute-0 sudo[60107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycwrazyowxtustlqvllhypbjaeqttku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525252.8702166-348-139288256521424/AnsiballZ_systemd.py'
Jan 27 14:47:33 compute-0 sudo[60107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:33 compute-0 python3.9[60109]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:47:33 compute-0 systemd[1]: Reloading Network Manager...
Jan 27 14:47:33 compute-0 NetworkManager[56090]: <info>  [1769525253.5558] audit: op="reload" arg="0" pid=60113 uid=0 result="success"
Jan 27 14:47:33 compute-0 NetworkManager[56090]: <info>  [1769525253.5564] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 27 14:47:33 compute-0 systemd[1]: Reloaded Network Manager.
Jan 27 14:47:33 compute-0 sudo[60107]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:33 compute-0 sshd-session[52095]: Connection closed by 192.168.122.30 port 53358
Jan 27 14:47:33 compute-0 sshd-session[52092]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:47:34 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 27 14:47:34 compute-0 systemd[1]: session-12.scope: Consumed 47.954s CPU time.
Jan 27 14:47:34 compute-0 systemd-logind[820]: Session 12 logged out. Waiting for processes to exit.
Jan 27 14:47:34 compute-0 systemd-logind[820]: Removed session 12.
Jan 27 14:47:39 compute-0 sshd-session[60144]: Accepted publickey for zuul from 192.168.122.30 port 54208 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:47:39 compute-0 systemd-logind[820]: New session 13 of user zuul.
Jan 27 14:47:39 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 27 14:47:39 compute-0 sshd-session[60144]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:47:40 compute-0 python3.9[60297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:47:41 compute-0 python3.9[60451]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:47:42 compute-0 python3.9[60641]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:47:43 compute-0 sshd-session[60147]: Connection closed by 192.168.122.30 port 54208
Jan 27 14:47:43 compute-0 sshd-session[60144]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:47:43 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 27 14:47:43 compute-0 systemd[1]: session-13.scope: Consumed 2.134s CPU time.
Jan 27 14:47:43 compute-0 systemd-logind[820]: Session 13 logged out. Waiting for processes to exit.
Jan 27 14:47:43 compute-0 systemd-logind[820]: Removed session 13.
Jan 27 14:47:43 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 14:47:48 compute-0 sshd-session[60671]: Accepted publickey for zuul from 192.168.122.30 port 54588 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:47:48 compute-0 systemd-logind[820]: New session 14 of user zuul.
Jan 27 14:47:48 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 27 14:47:48 compute-0 sshd-session[60671]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:47:50 compute-0 python3.9[60824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:47:50 compute-0 python3.9[60978]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:47:51 compute-0 sudo[61132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkhzyikpltculmoxpthpropqjifcfyzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525271.3359308-35-72323044072993/AnsiballZ_setup.py'
Jan 27 14:47:51 compute-0 sudo[61132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:51 compute-0 python3.9[61134]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:47:52 compute-0 sudo[61132]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:52 compute-0 sudo[61217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luoaalboevplplmnncrmoxmahzxzqgzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525271.3359308-35-72323044072993/AnsiballZ_dnf.py'
Jan 27 14:47:52 compute-0 sudo[61217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:52 compute-0 python3.9[61219]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:47:54 compute-0 sudo[61217]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:54 compute-0 sudo[61370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrlfbijnkjmxfnpvywvmmdzeiqatkoom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525274.216057-47-98670821091441/AnsiballZ_setup.py'
Jan 27 14:47:54 compute-0 sudo[61370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:54 compute-0 python3.9[61372]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:47:55 compute-0 sudo[61370]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:55 compute-0 sudo[61562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfouqqmmberdjhdzkbgvfevmjwlrmagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525275.2967832-58-245324394025430/AnsiballZ_file.py'
Jan 27 14:47:55 compute-0 sudo[61562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:55 compute-0 python3.9[61564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:55 compute-0 sudo[61562]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:56 compute-0 sudo[61714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvhcjyojchwaskpkjslntioietykxeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525276.102275-66-159768566855664/AnsiballZ_command.py'
Jan 27 14:47:56 compute-0 sudo[61714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:56 compute-0 python3.9[61716]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:47:56 compute-0 sudo[61714]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:57 compute-0 sudo[61878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smlvqhefwgqutniowjontgiwbytpisdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525276.9825954-74-244545351030643/AnsiballZ_stat.py'
Jan 27 14:47:57 compute-0 sudo[61878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:57 compute-0 python3.9[61880]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:47:57 compute-0 sudo[61878]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:57 compute-0 sudo[61956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eentxapoivmoszedfoqjyxtgbsgqouai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525276.9825954-74-244545351030643/AnsiballZ_file.py'
Jan 27 14:47:57 compute-0 sudo[61956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:58 compute-0 python3.9[61958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:47:58 compute-0 sudo[61956]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:58 compute-0 sudo[62108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dygbydalbbzeuwmcccerluygqylhleeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525278.3104405-86-163855410317728/AnsiballZ_stat.py'
Jan 27 14:47:58 compute-0 sudo[62108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:59 compute-0 python3.9[62110]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:47:59 compute-0 sudo[62108]: pam_unix(sudo:session): session closed for user root
Jan 27 14:47:59 compute-0 sudo[62186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atnhbwtqtwzmcuyiswtbygrypsyohano ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525278.3104405-86-163855410317728/AnsiballZ_file.py'
Jan 27 14:47:59 compute-0 sudo[62186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:47:59 compute-0 python3.9[62188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:47:59 compute-0 sudo[62186]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:00 compute-0 sudo[62338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmxukecalsyccrroxatysewukqtroob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525279.7145214-99-40184203396849/AnsiballZ_ini_file.py'
Jan 27 14:48:00 compute-0 sudo[62338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:00 compute-0 python3.9[62340]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:00 compute-0 sudo[62338]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:00 compute-0 sudo[62490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjprihqvbkzltkdtptqxyuegfczypkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525280.492569-99-17317269472822/AnsiballZ_ini_file.py'
Jan 27 14:48:00 compute-0 sudo[62490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:01 compute-0 python3.9[62492]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:01 compute-0 sudo[62490]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:01 compute-0 sudo[62642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjyllbwikxgvzecmyocrnrgxezxsngyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525281.181525-99-104543916966096/AnsiballZ_ini_file.py'
Jan 27 14:48:01 compute-0 sudo[62642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:01 compute-0 python3.9[62644]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:01 compute-0 sudo[62642]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:02 compute-0 sudo[62794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrciyerscvqwpxvwmthiyqhqkwqfquh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525281.7568398-99-241684092104115/AnsiballZ_ini_file.py'
Jan 27 14:48:02 compute-0 sudo[62794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:02 compute-0 python3.9[62796]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:02 compute-0 sudo[62794]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:02 compute-0 sudo[62946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glsjhdulougsrsjrtsiebnekjcwuucaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525282.4815323-130-243219839289526/AnsiballZ_dnf.py'
Jan 27 14:48:02 compute-0 sudo[62946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:02 compute-0 python3.9[62948]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:48:04 compute-0 sudo[62946]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:04 compute-0 sudo[63099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzkeabymvlpwebvxdxeoynkiligblzmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525284.6496067-141-90254068603434/AnsiballZ_setup.py'
Jan 27 14:48:04 compute-0 sudo[63099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:05 compute-0 python3.9[63101]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:48:05 compute-0 sudo[63099]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:05 compute-0 sudo[63253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzxjiwfypixwqavonrwiqugbpbsealp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525285.3894176-149-250156487982720/AnsiballZ_stat.py'
Jan 27 14:48:05 compute-0 sudo[63253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:05 compute-0 python3.9[63255]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:48:05 compute-0 sudo[63253]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:06 compute-0 sudo[63405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpknrlrnouuweqvwbywckmfrqluuiuth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525286.073793-158-6703390444650/AnsiballZ_stat.py'
Jan 27 14:48:06 compute-0 sudo[63405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:06 compute-0 python3.9[63407]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:48:06 compute-0 sudo[63405]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:07 compute-0 sudo[63557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgsodtxgvjdmdzncgwedbsyfztauxsqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525286.819893-168-254734152443679/AnsiballZ_command.py'
Jan 27 14:48:07 compute-0 sudo[63557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:07 compute-0 python3.9[63559]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:48:07 compute-0 sudo[63557]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:07 compute-0 sudo[63710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thzayqjdmzbtrdnhprmbzfyysyzrwgah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525287.541388-178-58953987136039/AnsiballZ_service_facts.py'
Jan 27 14:48:07 compute-0 sudo[63710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:08 compute-0 python3.9[63712]: ansible-service_facts Invoked
Jan 27 14:48:08 compute-0 network[63729]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:48:08 compute-0 network[63730]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:48:08 compute-0 network[63731]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:48:11 compute-0 sudo[63710]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:12 compute-0 sudo[64014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopnomhgkccqkrblmlegqltpqtqutsxy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769525292.4333608-193-153569225828647/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769525292.4333608-193-153569225828647/args'
Jan 27 14:48:12 compute-0 sudo[64014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:12 compute-0 sudo[64014]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:13 compute-0 sudo[64181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlyhgxymcwtwnjiikfrihglqmrsyiac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525293.2114015-204-89881084656567/AnsiballZ_dnf.py'
Jan 27 14:48:13 compute-0 sudo[64181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:13 compute-0 python3.9[64183]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:48:15 compute-0 sudo[64181]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:16 compute-0 sudo[64334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhyequzfotvuajerslusnmybsubeuhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525295.3751276-217-85242046944398/AnsiballZ_package_facts.py'
Jan 27 14:48:16 compute-0 sudo[64334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:16 compute-0 python3.9[64336]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 14:48:16 compute-0 sudo[64334]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:17 compute-0 sudo[64486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddikynrydnlyubhsvnkbfiaricueskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525296.9276779-227-185483892871340/AnsiballZ_stat.py'
Jan 27 14:48:17 compute-0 sudo[64486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:17 compute-0 python3.9[64488]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:17 compute-0 sudo[64486]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:17 compute-0 sudo[64611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcvjueecxwzoaufeukdiykudsyslwdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525296.9276779-227-185483892871340/AnsiballZ_copy.py'
Jan 27 14:48:17 compute-0 sudo[64611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:18 compute-0 python3.9[64613]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525296.9276779-227-185483892871340/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:18 compute-0 sudo[64611]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:18 compute-0 sudo[64765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibfybcxpggnlulkjyxwzedhzkgfawyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525298.2883399-242-23356336498454/AnsiballZ_stat.py'
Jan 27 14:48:18 compute-0 sudo[64765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:18 compute-0 python3.9[64767]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:18 compute-0 sudo[64765]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:19 compute-0 sudo[64890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cizwjyefotywhvecnhthhknlbxnnaklw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525298.2883399-242-23356336498454/AnsiballZ_copy.py'
Jan 27 14:48:19 compute-0 sudo[64890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:19 compute-0 python3.9[64892]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525298.2883399-242-23356336498454/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:19 compute-0 sudo[64890]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:20 compute-0 sudo[65044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxnmtotuydlrbrqkoumdtmurftewxrgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525299.8952944-263-170898105066645/AnsiballZ_lineinfile.py'
Jan 27 14:48:20 compute-0 sudo[65044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:20 compute-0 python3.9[65046]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:20 compute-0 sudo[65044]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:21 compute-0 sudo[65198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrukonnsznnsisqejyvasvhuvlfpvtml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525301.3036647-278-30241995725356/AnsiballZ_setup.py'
Jan 27 14:48:21 compute-0 sudo[65198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:22 compute-0 python3.9[65200]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:48:22 compute-0 sudo[65198]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:23 compute-0 sudo[65283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yenmbbajarohnhhbbzfqxxvtqctftsum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525301.3036647-278-30241995725356/AnsiballZ_systemd.py'
Jan 27 14:48:23 compute-0 sudo[65283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:23 compute-0 python3.9[65285]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:23 compute-0 sudo[65283]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:24 compute-0 sudo[65437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teceydlmbzgelibkresyxfcfxsfndefu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525304.1408029-294-105522229003216/AnsiballZ_setup.py'
Jan 27 14:48:24 compute-0 sudo[65437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:24 compute-0 python3.9[65439]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:48:24 compute-0 sudo[65437]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:25 compute-0 sudo[65521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbjqfukkoezbatvmfyqpxcjwjomhngk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525304.1408029-294-105522229003216/AnsiballZ_systemd.py'
Jan 27 14:48:25 compute-0 sudo[65521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:25 compute-0 python3.9[65523]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:48:25 compute-0 chronyd[830]: chronyd exiting
Jan 27 14:48:25 compute-0 systemd[1]: Stopping NTP client/server...
Jan 27 14:48:25 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 27 14:48:25 compute-0 systemd[1]: Stopped NTP client/server.
Jan 27 14:48:25 compute-0 systemd[1]: Starting NTP client/server...
Jan 27 14:48:25 compute-0 chronyd[65532]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 14:48:25 compute-0 chronyd[65532]: Frequency -26.857 +/- 0.251 ppm read from /var/lib/chrony/drift
Jan 27 14:48:25 compute-0 chronyd[65532]: Loaded seccomp filter (level 2)
Jan 27 14:48:25 compute-0 systemd[1]: Started NTP client/server.
Jan 27 14:48:25 compute-0 sudo[65521]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:26 compute-0 sshd-session[60674]: Connection closed by 192.168.122.30 port 54588
Jan 27 14:48:26 compute-0 sshd-session[60671]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:48:26 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 27 14:48:26 compute-0 systemd[1]: session-14.scope: Consumed 24.534s CPU time.
Jan 27 14:48:26 compute-0 systemd-logind[820]: Session 14 logged out. Waiting for processes to exit.
Jan 27 14:48:26 compute-0 systemd-logind[820]: Removed session 14.
Jan 27 14:48:32 compute-0 sshd-session[65558]: Accepted publickey for zuul from 192.168.122.30 port 45806 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:48:32 compute-0 systemd-logind[820]: New session 15 of user zuul.
Jan 27 14:48:32 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 27 14:48:32 compute-0 sshd-session[65558]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:48:33 compute-0 python3.9[65711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:48:34 compute-0 sudo[65865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tieqqaqtkvxyvfsbjnwzhvgnxvzhubls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525313.691775-28-56965502740462/AnsiballZ_file.py'
Jan 27 14:48:34 compute-0 sudo[65865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:34 compute-0 python3.9[65867]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:34 compute-0 sudo[65865]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:34 compute-0 sudo[66040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbvnafxsknpdagqeheqpwifewiywnfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525314.5470247-36-11438799546231/AnsiballZ_stat.py'
Jan 27 14:48:34 compute-0 sudo[66040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:35 compute-0 python3.9[66042]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:35 compute-0 sudo[66040]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:35 compute-0 sudo[66118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzjmqegnovjabxgrnuwrwyofnemtfhdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525314.5470247-36-11438799546231/AnsiballZ_file.py'
Jan 27 14:48:35 compute-0 sudo[66118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:35 compute-0 python3.9[66120]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.2a70eg8b recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:35 compute-0 sudo[66118]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:36 compute-0 sudo[66270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgqwdqhzqiumwtqxxrlmxqhqcrjiwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525315.905216-56-191751199406149/AnsiballZ_stat.py'
Jan 27 14:48:36 compute-0 sudo[66270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:36 compute-0 python3.9[66272]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:36 compute-0 sudo[66270]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:36 compute-0 sudo[66393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nniqzzhweygaenhsdghhtlqbsxwxzvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525315.905216-56-191751199406149/AnsiballZ_copy.py'
Jan 27 14:48:36 compute-0 sudo[66393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:37 compute-0 python3.9[66395]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525315.905216-56-191751199406149/.source _original_basename=._zy7onaw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:37 compute-0 sudo[66393]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:37 compute-0 sudo[66545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjbaajiqowxxwyfoocjnveukytnuhdex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525317.3308346-72-279162759722154/AnsiballZ_file.py'
Jan 27 14:48:37 compute-0 sudo[66545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:37 compute-0 python3.9[66547]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:37 compute-0 sudo[66545]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:38 compute-0 sudo[66697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwbcuyoybmucjvdsyjugrjodvzapqtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525317.960505-80-159666634255692/AnsiballZ_stat.py'
Jan 27 14:48:38 compute-0 sudo[66697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:38 compute-0 python3.9[66699]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:38 compute-0 sudo[66697]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:38 compute-0 sudo[66820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwougfxykxukqdksrahznlchfrmwniwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525317.960505-80-159666634255692/AnsiballZ_copy.py'
Jan 27 14:48:38 compute-0 sudo[66820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:38 compute-0 python3.9[66822]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525317.960505-80-159666634255692/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:38 compute-0 sudo[66820]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:39 compute-0 sudo[66972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppgnbbulgdfzlldjowaxdxobjdrauih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525319.0402455-80-239652539517031/AnsiballZ_stat.py'
Jan 27 14:48:39 compute-0 sudo[66972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:39 compute-0 python3.9[66974]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:39 compute-0 sudo[66972]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:39 compute-0 sudo[67095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjeemvrcrzsoesyymropupldezmyrsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525319.0402455-80-239652539517031/AnsiballZ_copy.py'
Jan 27 14:48:39 compute-0 sudo[67095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:40 compute-0 python3.9[67097]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525319.0402455-80-239652539517031/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:48:40 compute-0 sudo[67095]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:40 compute-0 sudo[67247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmerfsdjrgzmyqssgxkoifzbytbzkzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525320.2173636-109-201045890742354/AnsiballZ_file.py'
Jan 27 14:48:40 compute-0 sudo[67247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:40 compute-0 python3.9[67249]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:40 compute-0 sudo[67247]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:46 compute-0 sudo[67399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sminpaqxgechurutcfpmrdhnxgqdiikq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525320.857793-117-113362841048097/AnsiballZ_stat.py'
Jan 27 14:48:46 compute-0 sudo[67399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:46 compute-0 python3.9[67401]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:46 compute-0 sudo[67399]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:46 compute-0 sudo[67522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhcwtyxtgtewfkalwhctgpytcxaunae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525320.857793-117-113362841048097/AnsiballZ_copy.py'
Jan 27 14:48:46 compute-0 sudo[67522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:47 compute-0 python3.9[67524]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525320.857793-117-113362841048097/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:47 compute-0 sudo[67522]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:47 compute-0 sudo[67674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgpmudtaktsxuvsxnowtsdoesfutnsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525327.3107362-132-212228129784591/AnsiballZ_stat.py'
Jan 27 14:48:47 compute-0 sudo[67674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:47 compute-0 python3.9[67676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:47 compute-0 sudo[67674]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:48 compute-0 sudo[67797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpuicnsmdxjutfhhcoyhsyxrsswarbce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525327.3107362-132-212228129784591/AnsiballZ_copy.py'
Jan 27 14:48:48 compute-0 sudo[67797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:48 compute-0 python3.9[67799]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525327.3107362-132-212228129784591/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:48 compute-0 sudo[67797]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:49 compute-0 sudo[67949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atacmajjwldwpsfoqsktuijcjkjycgqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525328.432792-147-6474836926715/AnsiballZ_systemd.py'
Jan 27 14:48:49 compute-0 sudo[67949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:49 compute-0 python3.9[67951]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:49 compute-0 systemd[1]: Reloading.
Jan 27 14:48:49 compute-0 systemd-rc-local-generator[67977]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:49 compute-0 systemd-sysv-generator[67982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:49 compute-0 systemd[1]: Reloading.
Jan 27 14:48:49 compute-0 systemd-rc-local-generator[68016]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:49 compute-0 systemd-sysv-generator[68021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:49 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 27 14:48:49 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 27 14:48:49 compute-0 sudo[67949]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:50 compute-0 sudo[68177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskhueeolfhsdbkasvifzjpvpfswwyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525329.9660559-155-184170942226422/AnsiballZ_stat.py'
Jan 27 14:48:50 compute-0 sudo[68177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:50 compute-0 python3.9[68179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:50 compute-0 sudo[68177]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:50 compute-0 sudo[68300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmlfannpvlaspkzxrcgudcqzrdkvvffg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525329.9660559-155-184170942226422/AnsiballZ_copy.py'
Jan 27 14:48:50 compute-0 sudo[68300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:51 compute-0 python3.9[68302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525329.9660559-155-184170942226422/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:51 compute-0 sudo[68300]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:51 compute-0 sudo[68452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srgmofdrytlieewgstfqwixqyxybdutz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525331.2194932-170-24766607517289/AnsiballZ_stat.py'
Jan 27 14:48:51 compute-0 sudo[68452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:51 compute-0 python3.9[68454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:48:51 compute-0 sudo[68452]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:51 compute-0 sudo[68575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxkfunhbwacseqtbfpwdcqzhbrcxktck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525331.2194932-170-24766607517289/AnsiballZ_copy.py'
Jan 27 14:48:51 compute-0 sudo[68575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:52 compute-0 python3.9[68577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525331.2194932-170-24766607517289/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:48:52 compute-0 sudo[68575]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:52 compute-0 sudo[68727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyvvcvjpzulqjxbcrekldqggmxtnjxzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525332.4460056-185-243597950956929/AnsiballZ_systemd.py'
Jan 27 14:48:52 compute-0 sudo[68727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:53 compute-0 python3.9[68729]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:53 compute-0 systemd[1]: Reloading.
Jan 27 14:48:53 compute-0 systemd-rc-local-generator[68758]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:53 compute-0 systemd-sysv-generator[68761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:53 compute-0 systemd[1]: Reloading.
Jan 27 14:48:53 compute-0 systemd-rc-local-generator[68795]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:53 compute-0 systemd-sysv-generator[68799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:53 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 14:48:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 14:48:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 14:48:53 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 14:48:53 compute-0 sudo[68727]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:54 compute-0 python3.9[68956]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:48:54 compute-0 network[68973]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:48:54 compute-0 network[68974]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:48:54 compute-0 network[68975]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:48:57 compute-0 sudo[69235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihurvznlewjqyokvwqeuqwgnycgsmzam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525336.8200831-201-518544605225/AnsiballZ_systemd.py'
Jan 27 14:48:57 compute-0 sudo[69235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:57 compute-0 python3.9[69237]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:57 compute-0 systemd[1]: Reloading.
Jan 27 14:48:57 compute-0 systemd-sysv-generator[69270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:57 compute-0 systemd-rc-local-generator[69266]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:57 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 27 14:48:58 compute-0 iptables.init[69277]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 27 14:48:58 compute-0 iptables.init[69277]: iptables: Flushing firewall rules: [  OK  ]
Jan 27 14:48:58 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 27 14:48:58 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 27 14:48:58 compute-0 sudo[69235]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:58 compute-0 sudo[69472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjaosyprwrlofpuyaiyisejuxgaxffer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525338.251514-201-69813447968612/AnsiballZ_systemd.py'
Jan 27 14:48:58 compute-0 sudo[69472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:58 compute-0 python3.9[69474]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:58 compute-0 sudo[69472]: pam_unix(sudo:session): session closed for user root
Jan 27 14:48:59 compute-0 sudo[69626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnffpdgxgyghnfknlrwkowaslcflixhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525339.1008563-217-114433727759099/AnsiballZ_systemd.py'
Jan 27 14:48:59 compute-0 sudo[69626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:48:59 compute-0 python3.9[69628]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:48:59 compute-0 systemd[1]: Reloading.
Jan 27 14:48:59 compute-0 systemd-rc-local-generator[69656]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:48:59 compute-0 systemd-sysv-generator[69659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:48:59 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 27 14:48:59 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 27 14:48:59 compute-0 sudo[69626]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:00 compute-0 sudo[69818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhapcpezrxubtrrphoesmbrjqzsvbxnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525340.116208-225-254093404888646/AnsiballZ_command.py'
Jan 27 14:49:00 compute-0 sudo[69818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:00 compute-0 python3.9[69820]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:00 compute-0 sudo[69818]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:01 compute-0 sudo[69971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccaaddqojjdofsmdelxwheftlbqtfbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525341.374683-239-3773913366386/AnsiballZ_stat.py'
Jan 27 14:49:01 compute-0 sudo[69971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:01 compute-0 python3.9[69973]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:01 compute-0 sudo[69971]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:02 compute-0 sudo[70096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfahstlnzfhtafxssptadlrqvgccmgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525341.374683-239-3773913366386/AnsiballZ_copy.py'
Jan 27 14:49:02 compute-0 sudo[70096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:02 compute-0 python3.9[70098]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525341.374683-239-3773913366386/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:02 compute-0 sudo[70096]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:02 compute-0 sudo[70249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhjcwvqugamkgsqpsjcebpgjwjhedyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525342.6170495-254-220565210407283/AnsiballZ_systemd.py'
Jan 27 14:49:02 compute-0 sudo[70249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:03 compute-0 python3.9[70251]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:49:03 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 27 14:49:03 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 27 14:49:03 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 27 14:49:03 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 27 14:49:03 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 27 14:49:03 compute-0 sudo[70249]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:03 compute-0 sudo[70405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itcselulelhuoqdvhvyhvvlmsynzutrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525343.389629-262-45660117545363/AnsiballZ_file.py'
Jan 27 14:49:03 compute-0 sudo[70405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:03 compute-0 python3.9[70407]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:03 compute-0 sudo[70405]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:04 compute-0 sudo[70557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfisfmxqybjyxxskizfxrliplofeehnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525344.0286055-270-22298921924821/AnsiballZ_stat.py'
Jan 27 14:49:04 compute-0 sudo[70557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:04 compute-0 python3.9[70559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:04 compute-0 sudo[70557]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:05 compute-0 sudo[70680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrremgsozqghilmhqetrpmteqymtzgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525344.0286055-270-22298921924821/AnsiballZ_copy.py'
Jan 27 14:49:05 compute-0 sudo[70680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:05 compute-0 python3.9[70682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525344.0286055-270-22298921924821/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:05 compute-0 sudo[70680]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:05 compute-0 sudo[70832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplgbryfpjzzhakwfzfchsarujheeovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525345.5363598-288-153051574442905/AnsiballZ_timezone.py'
Jan 27 14:49:06 compute-0 sudo[70832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:06 compute-0 python3.9[70834]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 14:49:06 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 14:49:06 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 14:49:06 compute-0 sudo[70832]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:06 compute-0 sudo[70988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqakjexoxcazdaazeebjmhsvyipznkac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525346.5435758-297-248172185169037/AnsiballZ_file.py'
Jan 27 14:49:06 compute-0 sudo[70988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:06 compute-0 python3.9[70990]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:07 compute-0 sudo[70988]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:07 compute-0 sudo[71140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfuknxnriyzezgohgtuajmrrlvkbjjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525347.174302-305-242590379965139/AnsiballZ_stat.py'
Jan 27 14:49:07 compute-0 sudo[71140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:07 compute-0 python3.9[71142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:07 compute-0 sudo[71140]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:08 compute-0 sudo[71263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwsuamzvmjlnuvvpchdardrcfavhanx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525347.174302-305-242590379965139/AnsiballZ_copy.py'
Jan 27 14:49:08 compute-0 sudo[71263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:08 compute-0 python3.9[71265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525347.174302-305-242590379965139/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:08 compute-0 sudo[71263]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:08 compute-0 sudo[71415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywathjgunnhoifjdpdgvtlzfrbxjpgqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525348.48592-320-36010622567566/AnsiballZ_stat.py'
Jan 27 14:49:08 compute-0 sudo[71415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:09 compute-0 python3.9[71417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:09 compute-0 sudo[71415]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:09 compute-0 sudo[71538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grznugdpczlmzbdoxojebenvecsfkgzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525348.48592-320-36010622567566/AnsiballZ_copy.py'
Jan 27 14:49:09 compute-0 sudo[71538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:09 compute-0 python3.9[71540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525348.48592-320-36010622567566/.source.yaml _original_basename=.ngsuxtv_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:09 compute-0 sudo[71538]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:10 compute-0 sudo[71690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viboxlkhiigmivlvninmeogewqlsmafu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525349.858573-335-218945176857616/AnsiballZ_stat.py'
Jan 27 14:49:10 compute-0 sudo[71690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:10 compute-0 python3.9[71692]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:10 compute-0 sudo[71690]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:10 compute-0 sudo[71813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkiabgpjiaiqjeiwqpqtlrbmgkoytgag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525349.858573-335-218945176857616/AnsiballZ_copy.py'
Jan 27 14:49:10 compute-0 sudo[71813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:10 compute-0 python3.9[71815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525349.858573-335-218945176857616/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:11 compute-0 sudo[71813]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:11 compute-0 sudo[71965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bipawvcgwojzvjmycjouswoizffchkaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525351.1765766-350-127886341516842/AnsiballZ_command.py'
Jan 27 14:49:11 compute-0 sudo[71965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:11 compute-0 python3.9[71967]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:11 compute-0 sudo[71965]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:12 compute-0 sudo[72118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azuusysutvuyutjdsqvgstgubrdebzso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525351.8150852-358-257698263275546/AnsiballZ_command.py'
Jan 27 14:49:12 compute-0 sudo[72118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:12 compute-0 python3.9[72120]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:12 compute-0 sudo[72118]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:12 compute-0 sudo[72271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlhgdwffsfrnvqxiolyteujretjirxx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525352.4518433-366-162319225521005/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 14:49:12 compute-0 sudo[72271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:13 compute-0 python3[72273]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 14:49:13 compute-0 sudo[72271]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:13 compute-0 sudo[72423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eydrzvnfpvttlvkfyezxoikeazmttmug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525353.3451917-374-60808449155107/AnsiballZ_stat.py'
Jan 27 14:49:13 compute-0 sudo[72423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:13 compute-0 python3.9[72425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:13 compute-0 sudo[72423]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:14 compute-0 sudo[72546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psieejlloqgdvfhyqcalzoleldywejzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525353.3451917-374-60808449155107/AnsiballZ_copy.py'
Jan 27 14:49:14 compute-0 sudo[72546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:14 compute-0 python3.9[72548]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525353.3451917-374-60808449155107/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:14 compute-0 sudo[72546]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:14 compute-0 sudo[72698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csurhcntbxpygpalfetmxseazdtuhpfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525354.4897332-389-103817875484178/AnsiballZ_stat.py'
Jan 27 14:49:14 compute-0 sudo[72698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:14 compute-0 python3.9[72700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:14 compute-0 sudo[72698]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:15 compute-0 sudo[72821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niaaygnzhebypytssaecofzxlumglepd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525354.4897332-389-103817875484178/AnsiballZ_copy.py'
Jan 27 14:49:15 compute-0 sudo[72821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:15 compute-0 python3.9[72823]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525354.4897332-389-103817875484178/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:15 compute-0 sudo[72821]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:16 compute-0 sudo[72973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daiavhgssuwozotairxymtqrgkfoeweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525355.7444215-404-85718387698351/AnsiballZ_stat.py'
Jan 27 14:49:16 compute-0 sudo[72973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:16 compute-0 python3.9[72975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:16 compute-0 sudo[72973]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:16 compute-0 sudo[73096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojcnwyinzpfhfacezcotyfyejaftvkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525355.7444215-404-85718387698351/AnsiballZ_copy.py'
Jan 27 14:49:16 compute-0 sudo[73096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:16 compute-0 python3.9[73098]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525355.7444215-404-85718387698351/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:16 compute-0 sudo[73096]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:17 compute-0 sudo[73248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennxxebpadoqwppabssmwbcqytlfibjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525356.9132023-419-12744334714985/AnsiballZ_stat.py'
Jan 27 14:49:17 compute-0 sudo[73248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:17 compute-0 python3.9[73250]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:17 compute-0 sudo[73248]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:17 compute-0 sudo[73371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlzarbkglkqjwwcmlizbnxpvfjpyhite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525356.9132023-419-12744334714985/AnsiballZ_copy.py'
Jan 27 14:49:17 compute-0 sudo[73371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:17 compute-0 python3.9[73373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525356.9132023-419-12744334714985/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:17 compute-0 sudo[73371]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:18 compute-0 sudo[73523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybxybmsrxijnawklbvwsxzzvqfhchll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525358.092994-434-250960489226645/AnsiballZ_stat.py'
Jan 27 14:49:18 compute-0 sudo[73523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:18 compute-0 python3.9[73525]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:49:18 compute-0 sudo[73523]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:18 compute-0 sudo[73646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcyxbxihqvfwkxrkkmqxoitvmekhbuem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525358.092994-434-250960489226645/AnsiballZ_copy.py'
Jan 27 14:49:18 compute-0 sudo[73646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:19 compute-0 python3.9[73648]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525358.092994-434-250960489226645/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:19 compute-0 sudo[73646]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:19 compute-0 sudo[73798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omkyxlvrspjvvwbposdhroocolevxcxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525359.3343737-449-139756328511188/AnsiballZ_file.py'
Jan 27 14:49:19 compute-0 sudo[73798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:19 compute-0 python3.9[73800]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:19 compute-0 sudo[73798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:20 compute-0 sudo[73950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkitvtyotrvddcxzegbzqejjvalxazjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525360.0071442-457-261736411992358/AnsiballZ_command.py'
Jan 27 14:49:20 compute-0 sudo[73950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:20 compute-0 python3.9[73952]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:20 compute-0 sudo[73950]: pam_unix(sudo:session): session closed for user root
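The validation step above concatenates the five fragment files in dependency order (chains before the flushes, rules, and jumps that reference them) and pipes the result through `nft -c -f -`, which parses the combined ruleset without committing it. A minimal Python sketch of that pattern; the function names are hypothetical, only the file list and the `nft` invocation come from the log:

```python
import subprocess

# Dependency order from the logged command: chains must be defined before
# the flush/rule/jump fragments that reference them are parsed.
NFT_FILES = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def build_check_command(files):
    # Same shape as the logged pipeline: cat <files> | nft -c -f -
    return "cat " + " ".join(files) + " | nft -c -f -"

def validate_ruleset(files, runner=subprocess.run):
    # nft -c is a dry run: syntax and reference errors fail the check
    # without touching the live ruleset.
    result = runner(build_check_command(files), shell=True, check=False)
    return result.returncode == 0
```

The `runner` hook exists only so the logic can be exercised on a machine without `nft` installed.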
Jan 27 14:49:21 compute-0 sudo[74109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qslzoqjqfycknkcpiisgecvsjaptinpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525360.7082865-465-248437394049190/AnsiballZ_blockinfile.py'
Jan 27 14:49:21 compute-0 sudo[74109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:21 compute-0 python3.9[74111]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:21 compute-0 sudo[74109]: pam_unix(sudo:session): session closed for user root
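Given the logged `block=` content, `marker=# {mark} ANSIBLE MANAGED BLOCK`, and `marker_begin=BEGIN`/`marker_end=END`, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf would read as follows (reconstructed from the parameters above, not copied from the host):

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

The `validate=nft -c -f %s` argument makes blockinfile dry-run the edited file and abort the edit if the result does not parse.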
Jan 27 14:49:22 compute-0 sudo[74262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weiypreizchxfditmeycpfcrxwprgptp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525361.7815912-474-216937646126501/AnsiballZ_file.py'
Jan 27 14:49:22 compute-0 sudo[74262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:22 compute-0 python3.9[74264]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:22 compute-0 sudo[74262]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:22 compute-0 sudo[74414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxpsyqnvsfayptsxgsmdhhkljmhfsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525362.3655486-474-82829100501608/AnsiballZ_file.py'
Jan 27 14:49:22 compute-0 sudo[74414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:22 compute-0 python3.9[74416]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:22 compute-0 sudo[74414]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:23 compute-0 sudo[74566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hekggrmdftmpnjdlceidjnerawdrzngv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525363.0078304-489-20438590079898/AnsiballZ_mount.py'
Jan 27 14:49:23 compute-0 sudo[74566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:23 compute-0 python3.9[74568]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 14:49:23 compute-0 sudo[74566]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:23 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:49:24 compute-0 sudo[74720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtmmddvlcdgarmzmmyjkaxhuevlbsatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525364.034935-489-25664153312050/AnsiballZ_mount.py'
Jan 27 14:49:24 compute-0 sudo[74720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:24 compute-0 python3.9[74722]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 14:49:24 compute-0 sudo[74720]: pam_unix(sudo:session): session closed for user root
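With `state=mounted` and `boot=True`, ansible.posix.mount both performs the mount and persists it in /etc/fstab. From the logged parameters (src=none, fstype=hugetlbfs, opts=pagesize=…, dump=0, passno=0), the resulting fstab lines would plausibly be:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```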
Jan 27 14:49:24 compute-0 sshd-session[65561]: Connection closed by 192.168.122.30 port 45806
Jan 27 14:49:24 compute-0 sshd-session[65558]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:49:24 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 27 14:49:24 compute-0 systemd[1]: session-15.scope: Consumed 33.556s CPU time.
Jan 27 14:49:24 compute-0 systemd-logind[820]: Session 15 logged out. Waiting for processes to exit.
Jan 27 14:49:24 compute-0 systemd-logind[820]: Removed session 15.
Jan 27 14:49:30 compute-0 sshd-session[74748]: Accepted publickey for zuul from 192.168.122.30 port 47398 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:49:30 compute-0 systemd-logind[820]: New session 16 of user zuul.
Jan 27 14:49:30 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 27 14:49:30 compute-0 sshd-session[74748]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:49:31 compute-0 sudo[74901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkycjomriykerqlkkdchvkemdiexfmvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525370.714349-16-281051503732586/AnsiballZ_tempfile.py'
Jan 27 14:49:31 compute-0 sudo[74901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:31 compute-0 python3.9[74903]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 14:49:31 compute-0 sudo[74901]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:32 compute-0 sudo[75053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwtwpfyvhmegphzfuzjpttgcjlerdpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525371.5731466-28-230966368252128/AnsiballZ_stat.py'
Jan 27 14:49:32 compute-0 sudo[75053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:32 compute-0 python3.9[75055]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:49:32 compute-0 sudo[75053]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:33 compute-0 sudo[75205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enhyhmbpggogupnmvhqycngywormzfzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525372.4810119-38-89092800934163/AnsiballZ_setup.py'
Jan 27 14:49:33 compute-0 sudo[75205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:33 compute-0 python3.9[75207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:49:33 compute-0 sudo[75205]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:34 compute-0 sudo[75357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffgjmjutsyeucugyicdwmclkfqhwubc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525373.7681098-47-265361607137091/AnsiballZ_blockinfile.py'
Jan 27 14:49:34 compute-0 sudo[75357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:34 compute-0 python3.9[75359]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD13N1IvX/U6aKBMDFm+C18DEhn2sSsDv//LnwvwIoI3IzxcN6Sx3+KfALoRJEqVcGXlg3T7zqoHLkqODGn846pxKutv4ttvPGlz2dHGe5FViYNF7LROau7qWausZLtCj0ZmC/h0JxJ6QIo0XHWN3TIgTU0iYCeRdkmKjk5YNN3U5avhxOAhLKclU79eKahE+Bnh7NELuSnasK4FUYq0MmsYYj/4gJAFdVqdhmbgm1uMZZdGly+VAo8qVU5pN4zUeR7Awx5vjEudspIaKIfdVK5r5jILYdAj9Pv6TT6GJ0n6A54zW/8r1eO/8/E+De+XGMDwrkI27GVULfYh5kcVWrDCeKwGDewXyKrvRMSYqQUWgfaMD/aTLXmUaoJ7fEBD33MfZdxgrouQY/Zc8qjy1SNuz/mWJO21LP58xOVCcjp5f4xwCmTCAYNjPr2h108qhzC1YwOZyo/C5G4rKI9CkCuA+O0y3OlFaYHK1l6CcogtVV4X7UPyx6bcPuJmF4+V+k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLlZ3qTAAsShhiQim4KcE84++G7JTxHUJbtT3kKY+rc
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF5f0s9ynRpO7KAyH15YEJAeTX48eeyb/YLBzLritarrw5VZMQITEvzytbIspMFHl7gB3ciGgSPZ/hxpFfJ+XHc=
                                             create=True mode=0644 path=/tmp/ansible.yj3zjj1c state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:34 compute-0 sudo[75357]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:35 compute-0 sudo[75509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzvqaolsxncktpafgrfixxpsdqzeivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525374.918348-55-60260250116019/AnsiballZ_command.py'
Jan 27 14:49:35 compute-0 sudo[75509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:35 compute-0 python3.9[75511]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yj3zjj1c' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:35 compute-0 sudo[75509]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:36 compute-0 sudo[75663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqcahuzdaazvpqfmrzqevvmqfqrzpst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525375.6912951-63-120217582007732/AnsiballZ_file.py'
Jan 27 14:49:36 compute-0 sudo[75663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:36 compute-0 python3.9[75665]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yj3zjj1c state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 14:49:36 compute-0 sudo[75663]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:36 compute-0 sshd-session[74751]: Connection closed by 192.168.122.30 port 47398
Jan 27 14:49:36 compute-0 sshd-session[74748]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:49:36 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 27 14:49:36 compute-0 systemd[1]: session-16.scope: Consumed 3.589s CPU time.
Jan 27 14:49:36 compute-0 systemd-logind[820]: Session 16 logged out. Waiting for processes to exit.
Jan 27 14:49:36 compute-0 systemd-logind[820]: Removed session 16.
Jan 27 14:49:42 compute-0 sshd-session[75692]: Accepted publickey for zuul from 192.168.122.30 port 43114 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:49:42 compute-0 systemd-logind[820]: New session 17 of user zuul.
Jan 27 14:49:42 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 27 14:49:42 compute-0 sshd-session[75692]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:49:43 compute-0 python3.9[75845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:49:44 compute-0 sudo[75999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohttoaddcrsqavvwvzbvrbsejkrlfrly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525384.2638266-27-105565205382972/AnsiballZ_systemd.py'
Jan 27 14:49:44 compute-0 sudo[75999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:45 compute-0 python3.9[76001]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 14:49:45 compute-0 sudo[75999]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:45 compute-0 sudo[76153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtobozgsjxxqlpolnsxvqnhzeupapvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525385.4303255-35-261807425080060/AnsiballZ_systemd.py'
Jan 27 14:49:45 compute-0 sudo[76153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:46 compute-0 python3.9[76155]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:49:46 compute-0 sudo[76153]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:46 compute-0 sudo[76306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvqpurnbqhridfskrtxuexyorwffoqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525386.2003028-44-148680087337606/AnsiballZ_command.py'
Jan 27 14:49:46 compute-0 sudo[76306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:46 compute-0 python3.9[76308]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:46 compute-0 sudo[76306]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:47 compute-0 sudo[76459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwumdhhklelnitzozhdvbkbetlhiubgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525386.9621353-52-234625704253562/AnsiballZ_stat.py'
Jan 27 14:49:47 compute-0 sudo[76459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:47 compute-0 python3.9[76461]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:49:47 compute-0 sudo[76459]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:47 compute-0 sudo[76613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnppondvcjmmnxnptfbuwdicyuydimqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525387.7558029-60-8670165081616/AnsiballZ_command.py'
Jan 27 14:49:47 compute-0 sudo[76613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:48 compute-0 python3.9[76615]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:49:48 compute-0 sudo[76613]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:48 compute-0 sudo[76768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlmdnfzcyqnrtyjbfhcsthywudituzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525388.3615878-68-78787208631485/AnsiballZ_file.py'
Jan 27 14:49:48 compute-0 sudo[76768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:48 compute-0 python3.9[76770]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:49:49 compute-0 sudo[76768]: pam_unix(sudo:session): session closed for user root
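The `/etc/nftables/edpm-rules.nft.changed` file acts as a change marker: an earlier task touched it when new rules were written (14:49:19), this session stats it, reloads the flush/rule/update-jump fragments because it exists, and then deletes it so an unchanged rerun skips the reload. A hedged Python sketch of that idempotency pattern; `apply_rules_if_changed` is a hypothetical name:

```python
from pathlib import Path

def apply_rules_if_changed(rules: Path, reload_fn) -> bool:
    """Reload firewall rules only when the companion .changed marker
    exists, then clear the marker so the next run is a no-op."""
    marker = Path(str(rules) + ".changed")
    if not marker.exists():      # the stat step: nothing changed, skip reload
        return False
    reload_fn()                  # e.g. cat flushes/rules/update-jumps | nft -f -
    marker.unlink()              # the file state=absent step
    return True
```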
Jan 27 14:49:49 compute-0 sshd-session[75695]: Connection closed by 192.168.122.30 port 43114
Jan 27 14:49:49 compute-0 sshd-session[75692]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:49:49 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 27 14:49:49 compute-0 systemd[1]: session-17.scope: Consumed 4.062s CPU time.
Jan 27 14:49:49 compute-0 systemd-logind[820]: Session 17 logged out. Waiting for processes to exit.
Jan 27 14:49:49 compute-0 systemd-logind[820]: Removed session 17.
Jan 27 14:49:55 compute-0 sshd-session[76795]: Accepted publickey for zuul from 192.168.122.30 port 58152 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:49:55 compute-0 systemd-logind[820]: New session 18 of user zuul.
Jan 27 14:49:55 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 27 14:49:55 compute-0 sshd-session[76795]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:49:56 compute-0 python3.9[76948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:49:56 compute-0 sudo[77102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlxqxwyajgdwvoucvneahclffkrwzov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525396.5120695-29-149028122979135/AnsiballZ_setup.py'
Jan 27 14:49:56 compute-0 sudo[77102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:57 compute-0 python3.9[77104]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:49:57 compute-0 sudo[77102]: pam_unix(sudo:session): session closed for user root
Jan 27 14:49:57 compute-0 sudo[77186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utottlzeoobmagfxoxsepknojquqmbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525396.5120695-29-149028122979135/AnsiballZ_dnf.py'
Jan 27 14:49:57 compute-0 sudo[77186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:49:57 compute-0 python3.9[77188]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 14:49:59 compute-0 sudo[77186]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:00 compute-0 python3.9[77339]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:50:01 compute-0 python3.9[77490]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 14:50:02 compute-0 python3.9[77640]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:50:02 compute-0 python3.9[77790]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:50:03 compute-0 sshd-session[76798]: Connection closed by 192.168.122.30 port 58152
Jan 27 14:50:03 compute-0 sshd-session[76795]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:50:03 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 27 14:50:03 compute-0 systemd[1]: session-18.scope: Consumed 5.550s CPU time.
Jan 27 14:50:03 compute-0 systemd-logind[820]: Session 18 logged out. Waiting for processes to exit.
Jan 27 14:50:03 compute-0 systemd-logind[820]: Removed session 18.
Jan 27 14:50:09 compute-0 sshd-session[77815]: Accepted publickey for zuul from 192.168.122.30 port 37606 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:50:09 compute-0 systemd-logind[820]: New session 19 of user zuul.
Jan 27 14:50:09 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 27 14:50:09 compute-0 sshd-session[77815]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:50:10 compute-0 python3.9[77968]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:50:12 compute-0 sudo[78122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuqvoehumayasgtzoarkbssafsqmcres ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525411.6121087-45-184902153319812/AnsiballZ_file.py'
Jan 27 14:50:12 compute-0 sudo[78122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:12 compute-0 python3.9[78124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:12 compute-0 sudo[78122]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:12 compute-0 sudo[78274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlxivcacgamujqfeadewquwdwonrdfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525412.3794765-45-137763846912798/AnsiballZ_file.py'
Jan 27 14:50:12 compute-0 sudo[78274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:12 compute-0 python3.9[78276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:12 compute-0 sudo[78274]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:13 compute-0 sudo[78426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqevkjsfzqmubtewwwbwvkbniegedsma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525413.0392733-60-182638397255297/AnsiballZ_stat.py'
Jan 27 14:50:13 compute-0 sudo[78426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:13 compute-0 python3.9[78428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:13 compute-0 sudo[78426]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:14 compute-0 sudo[78549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmngybsvgghknljfvwdkcixlmwzdamvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525413.0392733-60-182638397255297/AnsiballZ_copy.py'
Jan 27 14:50:14 compute-0 sudo[78549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:14 compute-0 python3.9[78551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525413.0392733-60-182638397255297/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=17f8d8c093b992e9ec3b1ad1fa93e6220735154e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:14 compute-0 sudo[78549]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:14 compute-0 sudo[78701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkyhuvmydgzdqmslzoufmshpubwzlmkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525414.5544753-60-232192892302572/AnsiballZ_stat.py'
Jan 27 14:50:14 compute-0 sudo[78701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:14 compute-0 python3.9[78703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:14 compute-0 sudo[78701]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:15 compute-0 sudo[78824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzopgddmlepinxybeuedxauourouiofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525414.5544753-60-232192892302572/AnsiballZ_copy.py'
Jan 27 14:50:15 compute-0 sudo[78824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:15 compute-0 python3.9[78826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525414.5544753-60-232192892302572/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a2afb4b94e36851e52fc2ef1fd215275ac0b8cca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:15 compute-0 sudo[78824]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:15 compute-0 sudo[78976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulkeldcpjmmqdsudatseeujhwfzrvno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525415.5894313-60-223062382808111/AnsiballZ_stat.py'
Jan 27 14:50:15 compute-0 sudo[78976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:16 compute-0 python3.9[78978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:16 compute-0 sudo[78976]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:16 compute-0 sudo[79099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laixjndctsaguyiqbfzabxbcfimavcmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525415.5894313-60-223062382808111/AnsiballZ_copy.py'
Jan 27 14:50:16 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:16 compute-0 python3.9[79101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525415.5894313-60-223062382808111/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dd2b264c0a56193abe8d61b2ab72b7bc0c0ab18e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:16 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:17 compute-0 sudo[79251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnxiykcyktbqistuaxsnrsjzfeayocen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525416.8540595-104-123454494956858/AnsiballZ_file.py'
Jan 27 14:50:17 compute-0 sudo[79251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:17 compute-0 python3.9[79253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:17 compute-0 sudo[79251]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:17 compute-0 sudo[79403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirxydjduyanlsvxxqlhvvjfnzbbtucr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525417.50641-104-47992295766172/AnsiballZ_file.py'
Jan 27 14:50:17 compute-0 sudo[79403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:17 compute-0 python3.9[79405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:17 compute-0 sudo[79403]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:18 compute-0 sudo[79555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwgmaznzeewcsmrjxulabbjxoyjsggh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525418.1483188-119-129914201559477/AnsiballZ_stat.py'
Jan 27 14:50:18 compute-0 sudo[79555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:18 compute-0 python3.9[79557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:18 compute-0 sudo[79555]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:18 compute-0 sudo[79678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecsxumvftiyhouvxwibwfqqrukolqhzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525418.1483188-119-129914201559477/AnsiballZ_copy.py'
Jan 27 14:50:18 compute-0 sudo[79678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:19 compute-0 python3.9[79680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525418.1483188-119-129914201559477/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=78b1d281b6109e35cbc8ef646fad4b8501294602 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:19 compute-0 sudo[79678]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:19 compute-0 sudo[79830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvvvmxauxtnqotyveyaziuzcarpgtetn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525419.254328-119-26962703435437/AnsiballZ_stat.py'
Jan 27 14:50:19 compute-0 sudo[79830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:19 compute-0 python3.9[79832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:19 compute-0 sudo[79830]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:20 compute-0 sudo[79953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsaaslqdgaqfdvytpfmlgnrbmapspxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525419.254328-119-26962703435437/AnsiballZ_copy.py'
Jan 27 14:50:20 compute-0 sudo[79953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:20 compute-0 python3.9[79955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525419.254328-119-26962703435437/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a2afb4b94e36851e52fc2ef1fd215275ac0b8cca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:20 compute-0 sudo[79953]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:20 compute-0 sudo[80105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaycnpqxjqvybepbjfzpqvdpmviypjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525420.433892-119-55076647455738/AnsiballZ_stat.py'
Jan 27 14:50:20 compute-0 sudo[80105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:20 compute-0 python3.9[80107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:20 compute-0 sudo[80105]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:21 compute-0 sudo[80228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmovroyeqifhnkaljgzceevplnkivri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525420.433892-119-55076647455738/AnsiballZ_copy.py'
Jan 27 14:50:21 compute-0 sudo[80228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:21 compute-0 python3.9[80230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525420.433892-119-55076647455738/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8cf8396b0a89666104bb63819f80c678767d6c54 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:21 compute-0 sudo[80228]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:21 compute-0 sudo[80380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmomsqoplavyqhdaarlhgbdxocxbshvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525421.7047977-163-228652980162398/AnsiballZ_file.py'
Jan 27 14:50:22 compute-0 sudo[80380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:22 compute-0 python3.9[80382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:22 compute-0 sudo[80380]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:22 compute-0 sudo[80532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnxvzaaadvoeauiuiqwordvpqzuoamxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525422.3330064-163-50770728754951/AnsiballZ_file.py'
Jan 27 14:50:22 compute-0 sudo[80532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:22 compute-0 python3.9[80534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:22 compute-0 sudo[80532]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:23 compute-0 sudo[80684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhzdfnvgqnhrxkeichpcqkkcqbhtzel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525422.9907-178-151232570467180/AnsiballZ_stat.py'
Jan 27 14:50:23 compute-0 sudo[80684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:23 compute-0 python3.9[80686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:23 compute-0 sudo[80684]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:23 compute-0 sudo[80807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swvfamfajontkyorexsskjlhqwxxnjds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525422.9907-178-151232570467180/AnsiballZ_copy.py'
Jan 27 14:50:23 compute-0 sudo[80807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:23 compute-0 python3.9[80809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525422.9907-178-151232570467180/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9d27f10cbdb3b13cf33fca4a9efe5c0b7252d557 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:23 compute-0 sudo[80807]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:24 compute-0 sudo[80959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcqihqzbaudrjgaiiruapolswtxpeui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525424.1295884-178-105911752763208/AnsiballZ_stat.py'
Jan 27 14:50:24 compute-0 sudo[80959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:24 compute-0 python3.9[80961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:24 compute-0 sudo[80959]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:24 compute-0 sudo[81082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvdhvednympacnesvdrfbxawgaofbixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525424.1295884-178-105911752763208/AnsiballZ_copy.py'
Jan 27 14:50:24 compute-0 sudo[81082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:25 compute-0 python3.9[81084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525424.1295884-178-105911752763208/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=252bfcab109b304eed239435415e71fc4d352691 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:25 compute-0 sudo[81082]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:25 compute-0 sudo[81234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvexijkzwrlhamnadkythixquypaaeux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525425.283358-178-235192661894491/AnsiballZ_stat.py'
Jan 27 14:50:25 compute-0 sudo[81234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:25 compute-0 python3.9[81236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:25 compute-0 sudo[81234]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:26 compute-0 sudo[81357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwntuwazwflptonltntsuigmuluidvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525425.283358-178-235192661894491/AnsiballZ_copy.py'
Jan 27 14:50:26 compute-0 sudo[81357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:26 compute-0 python3.9[81359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525425.283358-178-235192661894491/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=487fb981ac4245e7422b4dfbefe7d6bf49425392 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:26 compute-0 sudo[81357]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:26 compute-0 sudo[81509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehbxamwobtscawruzrirjiimodszfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525426.4866707-222-60680792382816/AnsiballZ_file.py'
Jan 27 14:50:26 compute-0 sudo[81509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:26 compute-0 python3.9[81511]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:26 compute-0 sudo[81509]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:27 compute-0 sudo[81661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfjeagghtxxrkgypzfnouhozvqdyzgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525427.0574212-222-56856107845124/AnsiballZ_file.py'
Jan 27 14:50:27 compute-0 sudo[81661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:27 compute-0 python3.9[81663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:27 compute-0 sudo[81661]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:28 compute-0 sudo[81813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixdtuwprywzgrcmkdrmytysonfelfld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525427.7361112-237-136628307989875/AnsiballZ_stat.py'
Jan 27 14:50:28 compute-0 sudo[81813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:28 compute-0 python3.9[81815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:28 compute-0 sudo[81813]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:28 compute-0 sudo[81936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcqbfidvrvhtmqsipenyqfwrlhiseqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525427.7361112-237-136628307989875/AnsiballZ_copy.py'
Jan 27 14:50:28 compute-0 sudo[81936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:28 compute-0 python3.9[81938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525427.7361112-237-136628307989875/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d04f939d6dee3592cbe612e2caa4a22a55a92b31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:28 compute-0 sudo[81936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:29 compute-0 sudo[82088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qngcvqmuiacosakqtjnyrndsxkrzzfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525428.8670068-237-132304595644880/AnsiballZ_stat.py'
Jan 27 14:50:29 compute-0 sudo[82088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:29 compute-0 python3.9[82090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:29 compute-0 sudo[82088]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:29 compute-0 sudo[82211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdycxapimvgzpbncplzscaneteampert ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525428.8670068-237-132304595644880/AnsiballZ_copy.py'
Jan 27 14:50:29 compute-0 sudo[82211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:29 compute-0 python3.9[82213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525428.8670068-237-132304595644880/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ccf775e78a9889f1eee149d6adcac9d2e9e8a34b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:29 compute-0 sudo[82211]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:30 compute-0 sudo[82363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkthdihkzdhphnbieakjimmxkpalmhco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525430.012228-237-137591065449984/AnsiballZ_stat.py'
Jan 27 14:50:30 compute-0 sudo[82363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:30 compute-0 python3.9[82365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:30 compute-0 sudo[82363]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:30 compute-0 sudo[82486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujelgbtptumvpihvnloniodghrruxaep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525430.012228-237-137591065449984/AnsiballZ_copy.py'
Jan 27 14:50:30 compute-0 sudo[82486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:31 compute-0 python3.9[82488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525430.012228-237-137591065449984/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4598414eac946e2a8ce5bd2905c6205284762bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:31 compute-0 sudo[82486]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:31 compute-0 sudo[82638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqevhbgplawphzxllvminmvjirfeglga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525431.2407916-281-212534872122628/AnsiballZ_file.py'
Jan 27 14:50:31 compute-0 sudo[82638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:31 compute-0 python3.9[82640]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:31 compute-0 sudo[82638]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:32 compute-0 sudo[82790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmowzavibsbqobkhqsrpmivxkftuuiep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525431.942382-281-156291322412650/AnsiballZ_file.py'
Jan 27 14:50:32 compute-0 sudo[82790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:32 compute-0 python3.9[82792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:32 compute-0 sudo[82790]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:32 compute-0 sudo[82942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvfejsmvwmmmohaeyggzubrqdeovnat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525432.648608-296-278404724750469/AnsiballZ_stat.py'
Jan 27 14:50:32 compute-0 sudo[82942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:33 compute-0 python3.9[82944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:33 compute-0 sudo[82942]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:33 compute-0 sudo[83065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwviycuzsdeifqjptsrvmgiiugsoxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525432.648608-296-278404724750469/AnsiballZ_copy.py'
Jan 27 14:50:33 compute-0 sudo[83065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:33 compute-0 python3.9[83067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525432.648608-296-278404724750469/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=29b87be336452da84cbfff9087959f0737d32696 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:33 compute-0 sudo[83065]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:34 compute-0 sudo[83217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaouejpnjncjpzshlravjhohsfmnpnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525433.9930289-296-153226405206837/AnsiballZ_stat.py'
Jan 27 14:50:34 compute-0 sudo[83217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:34 compute-0 python3.9[83219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:34 compute-0 sudo[83217]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:34 compute-0 sudo[83340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnykpuqhhsogtvkkonvxpdnsctrlzuuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525433.9930289-296-153226405206837/AnsiballZ_copy.py'
Jan 27 14:50:34 compute-0 sudo[83340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:35 compute-0 python3.9[83342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525433.9930289-296-153226405206837/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=252bfcab109b304eed239435415e71fc4d352691 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:35 compute-0 sudo[83340]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:35 compute-0 chronyd[65532]: Selected source 216.232.132.102 (pool.ntp.org)
Jan 27 14:50:35 compute-0 sudo[83492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fichyehydlbnlcmnmyoqliqoovcagioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525435.161626-296-62090479319342/AnsiballZ_stat.py'
Jan 27 14:50:35 compute-0 sudo[83492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:35 compute-0 python3.9[83494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:35 compute-0 sudo[83492]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:35 compute-0 sudo[83615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywnlhgcculcbxisvpauylomgvcdyuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525435.161626-296-62090479319342/AnsiballZ_copy.py'
Jan 27 14:50:35 compute-0 sudo[83615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:36 compute-0 python3.9[83617]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525435.161626-296-62090479319342/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0afb6c756234d0cc8fa27e2143753d3760c0bb9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:36 compute-0 sudo[83615]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:37 compute-0 sudo[83767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzccmmwtltsoznloinmqrfzupznojow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525436.9176638-356-107026765449775/AnsiballZ_file.py'
Jan 27 14:50:37 compute-0 sudo[83767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:37 compute-0 python3.9[83769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:37 compute-0 sudo[83767]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:37 compute-0 sudo[83919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjbldszuzaikqvevvruzsatshbkifwak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525437.601334-364-42561059058104/AnsiballZ_stat.py'
Jan 27 14:50:37 compute-0 sudo[83919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:38 compute-0 python3.9[83921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:38 compute-0 sudo[83919]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:38 compute-0 sudo[84042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzrnogukvixyrnzvlfvefaupbabcgkaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525437.601334-364-42561059058104/AnsiballZ_copy.py'
Jan 27 14:50:38 compute-0 sudo[84042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:38 compute-0 python3.9[84044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525437.601334-364-42561059058104/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:38 compute-0 sudo[84042]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:39 compute-0 sudo[84194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgkozyjqcbbeanudgsipyrgflpblhvea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525439.0728185-380-35476188104772/AnsiballZ_file.py'
Jan 27 14:50:39 compute-0 sudo[84194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:39 compute-0 python3.9[84196]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:39 compute-0 sudo[84194]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:40 compute-0 sudo[84346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoulnfdbscpldqdsgmcdeebrntldjeqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525439.7705576-388-3995300628780/AnsiballZ_stat.py'
Jan 27 14:50:40 compute-0 sudo[84346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:40 compute-0 python3.9[84348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:40 compute-0 sudo[84346]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:40 compute-0 sudo[84469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsitiutvjgbgwtzqrvfzxgpnoenhxwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525439.7705576-388-3995300628780/AnsiballZ_copy.py'
Jan 27 14:50:40 compute-0 sudo[84469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:40 compute-0 python3.9[84471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525439.7705576-388-3995300628780/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:40 compute-0 sudo[84469]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:41 compute-0 sudo[84621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvzueukdxsplkjjpbkvmumdvchtqbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525440.9897642-404-199246350309447/AnsiballZ_file.py'
Jan 27 14:50:41 compute-0 sudo[84621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:41 compute-0 python3.9[84623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:41 compute-0 sudo[84621]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:41 compute-0 sudo[84773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjdqdcfkuslkvqmkpaxknjkqzsddonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525441.690823-412-143108976507746/AnsiballZ_stat.py'
Jan 27 14:50:41 compute-0 sudo[84773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:42 compute-0 python3.9[84775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:42 compute-0 sudo[84773]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:42 compute-0 sudo[84896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xegmrnfsiccoilzdzbumhcegiqzyazzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525441.690823-412-143108976507746/AnsiballZ_copy.py'
Jan 27 14:50:42 compute-0 sudo[84896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:42 compute-0 python3.9[84898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525441.690823-412-143108976507746/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:42 compute-0 sudo[84896]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:43 compute-0 sudo[85048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nteefiketypsnrxmvxjjxvfrtxftijet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525442.9199817-428-64881365788739/AnsiballZ_file.py'
Jan 27 14:50:43 compute-0 sudo[85048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:43 compute-0 python3.9[85050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:43 compute-0 sudo[85048]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:43 compute-0 sudo[85200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojiprffmbfmrqrxcbebkewjqfbzuxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525443.5782473-436-67562231270613/AnsiballZ_stat.py'
Jan 27 14:50:43 compute-0 sudo[85200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:44 compute-0 python3.9[85202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:44 compute-0 sudo[85200]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:44 compute-0 sudo[85323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfmsqafmaynrszmtjcowvdjywtxciwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525443.5782473-436-67562231270613/AnsiballZ_copy.py'
Jan 27 14:50:44 compute-0 sudo[85323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:44 compute-0 python3.9[85325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525443.5782473-436-67562231270613/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:44 compute-0 sudo[85323]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:45 compute-0 sudo[85475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hasgqmkgglmjzzooclotptnsisgtqzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525444.7708354-452-5560384918094/AnsiballZ_file.py'
Jan 27 14:50:45 compute-0 sudo[85475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:45 compute-0 python3.9[85477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:45 compute-0 sudo[85475]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:45 compute-0 sudo[85627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrbauxtpluhlololmispakrfuqoacgzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525445.3820467-460-172508940179159/AnsiballZ_stat.py'
Jan 27 14:50:45 compute-0 sudo[85627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:45 compute-0 python3.9[85629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:45 compute-0 sudo[85627]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:46 compute-0 sudo[85750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxmiseuioxvkixpfnjpyfaltoanlozxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525445.3820467-460-172508940179159/AnsiballZ_copy.py'
Jan 27 14:50:46 compute-0 sudo[85750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:46 compute-0 python3.9[85752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525445.3820467-460-172508940179159/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:46 compute-0 sudo[85750]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:46 compute-0 sudo[85902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqltandskdrsyyqfkjxvcrlrqriukzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525446.6060905-476-166405383207293/AnsiballZ_file.py'
Jan 27 14:50:46 compute-0 sudo[85902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:47 compute-0 python3.9[85904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:47 compute-0 sudo[85902]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:47 compute-0 sudo[86054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqogcurpyucvyssrnketcizxzbcllrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525447.2216902-484-211275011056807/AnsiballZ_stat.py'
Jan 27 14:50:47 compute-0 sudo[86054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:47 compute-0 python3.9[86056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:47 compute-0 sudo[86054]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:48 compute-0 sudo[86177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjsczidgoneigymdiiumxaomhunswrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525447.2216902-484-211275011056807/AnsiballZ_copy.py'
Jan 27 14:50:48 compute-0 sudo[86177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:48 compute-0 python3.9[86179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525447.2216902-484-211275011056807/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:48 compute-0 sudo[86177]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:48 compute-0 sudo[86329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodofqfnlmsorrjdokeaqbtyssfbdjpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525448.576732-500-21851115612513/AnsiballZ_file.py'
Jan 27 14:50:48 compute-0 sudo[86329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:49 compute-0 python3.9[86331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:49 compute-0 sudo[86329]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:49 compute-0 sudo[86481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvarwepenzrglsitdxroygsrhhrhdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525449.5519683-508-6935954135329/AnsiballZ_stat.py'
Jan 27 14:50:49 compute-0 sudo[86481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:50 compute-0 python3.9[86483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:50 compute-0 sudo[86481]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:50 compute-0 sudo[86604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgrdshaepnqbwpqlqhauvyzsvdnlgti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525449.5519683-508-6935954135329/AnsiballZ_copy.py'
Jan 27 14:50:50 compute-0 sudo[86604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:50 compute-0 python3.9[86606]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525449.5519683-508-6935954135329/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:50 compute-0 sudo[86604]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:51 compute-0 sudo[86756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eucxreqyuosokifssshzbizhtcwifrae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525450.782592-524-185075748207363/AnsiballZ_file.py'
Jan 27 14:50:51 compute-0 sudo[86756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:51 compute-0 python3.9[86758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:50:51 compute-0 sudo[86756]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:51 compute-0 sudo[86908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tabgrwyxikahzwdbjzojmdiexdsslrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525451.4419003-532-222425409494002/AnsiballZ_stat.py'
Jan 27 14:50:51 compute-0 sudo[86908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:51 compute-0 python3.9[86910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:50:51 compute-0 sudo[86908]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:52 compute-0 sudo[87031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzfcrdjmlabuzrwmadcrqeqbhynndyxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525451.4419003-532-222425409494002/AnsiballZ_copy.py'
Jan 27 14:50:52 compute-0 sudo[87031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:50:52 compute-0 python3.9[87033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525451.4419003-532-222425409494002/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2f887b8856f7683bf37464f08df3e925386e9ebd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:50:52 compute-0 sudo[87031]: pam_unix(sudo:session): session closed for user root
Jan 27 14:50:53 compute-0 sshd-session[77818]: Connection closed by 192.168.122.30 port 37606
Jan 27 14:50:53 compute-0 sshd-session[77815]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:50:53 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 27 14:50:53 compute-0 systemd[1]: session-19.scope: Consumed 32.915s CPU time.
Jan 27 14:50:53 compute-0 systemd-logind[820]: Session 19 logged out. Waiting for processes to exit.
Jan 27 14:50:53 compute-0 systemd-logind[820]: Removed session 19.
Jan 27 14:50:59 compute-0 sshd-session[87059]: Accepted publickey for zuul from 192.168.122.30 port 60556 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:51:00 compute-0 systemd-logind[820]: New session 20 of user zuul.
Jan 27 14:51:00 compute-0 systemd[1]: Started Session 20 of User zuul.
Jan 27 14:51:00 compute-0 sshd-session[87059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:51:01 compute-0 python3.9[87212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:51:02 compute-0 sudo[87366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyxzwpuxulxinsjeblggwwkxynevvyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525461.7701242-29-150858537655462/AnsiballZ_file.py'
Jan 27 14:51:02 compute-0 sudo[87366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:02 compute-0 python3.9[87368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:02 compute-0 sudo[87366]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:02 compute-0 sudo[87518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pngsixksvrseqbfhylsxjnmtfahrbhrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525462.5990672-29-112107521186887/AnsiballZ_file.py'
Jan 27 14:51:02 compute-0 sudo[87518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:03 compute-0 python3.9[87520]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:03 compute-0 sudo[87518]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:03 compute-0 python3.9[87670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:51:04 compute-0 sudo[87820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswndftxhodlavfivmcdqumxwldeuanh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525464.0426526-52-153920890476547/AnsiballZ_seboolean.py'
Jan 27 14:51:04 compute-0 sudo[87820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:04 compute-0 python3.9[87822]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 14:51:06 compute-0 sudo[87820]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:06 compute-0 sudo[87976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggcrnvcxtrywbuhrckovdmmuwiwbmhku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525466.575218-62-231852286062048/AnsiballZ_setup.py'
Jan 27 14:51:06 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 27 14:51:06 compute-0 sudo[87976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:07 compute-0 python3.9[87978]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:51:07 compute-0 sudo[87976]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:07 compute-0 sudo[88060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyramysmzlzytkrhiyhbdgjclujjryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525466.575218-62-231852286062048/AnsiballZ_dnf.py'
Jan 27 14:51:07 compute-0 sudo[88060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:08 compute-0 python3.9[88062]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:51:09 compute-0 sudo[88060]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:10 compute-0 sudo[88213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsduopypkhruizbqvdhqbxhcakbreyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525469.6703763-74-267749905324648/AnsiballZ_systemd.py'
Jan 27 14:51:10 compute-0 sudo[88213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:10 compute-0 python3.9[88215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:51:10 compute-0 sudo[88213]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:11 compute-0 sudo[88368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrraqbqmwnqcmmqtwraexymsaxlxbzvu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525470.8935502-82-39782333786242/AnsiballZ_edpm_nftables_snippet.py'
Jan 27 14:51:11 compute-0 sudo[88368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:11 compute-0 python3[88370]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 27 14:51:11 compute-0 sudo[88368]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:12 compute-0 sudo[88520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiwwxpaohzhupeyqyogwyharhmodastg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525471.8082812-91-176699116950743/AnsiballZ_file.py'
Jan 27 14:51:12 compute-0 sudo[88520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:12 compute-0 python3.9[88522]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:12 compute-0 sudo[88520]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:12 compute-0 sudo[88672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhoioewdynzplvrxrrtdxzmqmymtalba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525472.4769194-99-277776515340850/AnsiballZ_stat.py'
Jan 27 14:51:12 compute-0 sudo[88672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:13 compute-0 python3.9[88674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:13 compute-0 sudo[88672]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:13 compute-0 sudo[88750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quifwqicortlsjfvrdxvmjdrllnthkin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525472.4769194-99-277776515340850/AnsiballZ_file.py'
Jan 27 14:51:13 compute-0 sudo[88750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:13 compute-0 python3.9[88752]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:13 compute-0 sudo[88750]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:14 compute-0 sudo[88902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhmxnuurjvwgbolnyfrhvvqwunojynan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525473.8065429-111-226682374302119/AnsiballZ_stat.py'
Jan 27 14:51:14 compute-0 sudo[88902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:14 compute-0 python3.9[88904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:14 compute-0 sudo[88902]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:14 compute-0 sudo[88980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvwzmqzgmpayuubhuqdvprmsiakvpmkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525473.8065429-111-226682374302119/AnsiballZ_file.py'
Jan 27 14:51:14 compute-0 sudo[88980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:14 compute-0 python3.9[88982]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gep8h_3h recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:14 compute-0 sudo[88980]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:15 compute-0 sudo[89132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regotmhejfnprrdudkaudvpqsupqsmbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525474.9984202-123-190288935758562/AnsiballZ_stat.py'
Jan 27 14:51:15 compute-0 sudo[89132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:15 compute-0 python3.9[89134]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:15 compute-0 sudo[89132]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:15 compute-0 sudo[89210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhqvicvdotonbznddirubwiudmfueutv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525474.9984202-123-190288935758562/AnsiballZ_file.py'
Jan 27 14:51:15 compute-0 sudo[89210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:16 compute-0 python3.9[89212]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:16 compute-0 sudo[89210]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:16 compute-0 sudo[89362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqewoaegpgfjdnalantjphbpezukpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525476.242291-136-162009363474591/AnsiballZ_command.py'
Jan 27 14:51:16 compute-0 sudo[89362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:16 compute-0 python3.9[89364]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:16 compute-0 sudo[89362]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:17 compute-0 sudo[89515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igyxljlbsgsagcztsayhnmwetfwjcsqe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525477.1028-144-34139479880429/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 14:51:17 compute-0 sudo[89515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:17 compute-0 python3[89517]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 14:51:17 compute-0 sudo[89515]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:18 compute-0 sudo[89667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnspwjqwdexxpoqvrljsnrxopwjtkdgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525477.9979513-152-9874133455799/AnsiballZ_stat.py'
Jan 27 14:51:18 compute-0 sudo[89667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:18 compute-0 python3.9[89669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:18 compute-0 sudo[89667]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:19 compute-0 sudo[89792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhbdxhmgkyspnbwfrntcbkzhvdxnozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525477.9979513-152-9874133455799/AnsiballZ_copy.py'
Jan 27 14:51:19 compute-0 sudo[89792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:19 compute-0 python3.9[89794]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525477.9979513-152-9874133455799/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:19 compute-0 sudo[89792]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:19 compute-0 sudo[89944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narnpxiomkbbkkyqpzgvdxueoeatwqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525479.6387205-167-145537070557018/AnsiballZ_stat.py'
Jan 27 14:51:19 compute-0 sudo[89944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:20 compute-0 python3.9[89946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:20 compute-0 sudo[89944]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:20 compute-0 sudo[90069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-souxujohpamskafmhwmbdnihgpfbmirs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525479.6387205-167-145537070557018/AnsiballZ_copy.py'
Jan 27 14:51:20 compute-0 sudo[90069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:20 compute-0 python3.9[90071]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525479.6387205-167-145537070557018/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:20 compute-0 sudo[90069]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:21 compute-0 sudo[90221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biawjjowxuyvsdvybcvqeoclxrnoxyfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525480.9059918-182-72390232047773/AnsiballZ_stat.py'
Jan 27 14:51:21 compute-0 sudo[90221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:21 compute-0 python3.9[90223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:21 compute-0 sudo[90221]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:21 compute-0 sudo[90346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivowxuxpmlvtyipvyexaujpcfjivkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525480.9059918-182-72390232047773/AnsiballZ_copy.py'
Jan 27 14:51:21 compute-0 sudo[90346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:21 compute-0 python3.9[90348]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525480.9059918-182-72390232047773/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:21 compute-0 sudo[90346]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:22 compute-0 sudo[90498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhyobdbcjbhigsosrriupblnieucecko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525482.1576333-197-150846472478385/AnsiballZ_stat.py'
Jan 27 14:51:22 compute-0 sudo[90498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:22 compute-0 python3.9[90500]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:22 compute-0 sudo[90498]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:23 compute-0 sudo[90623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsweamdnlnadbqcyrbjqxypwqyuwgysf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525482.1576333-197-150846472478385/AnsiballZ_copy.py'
Jan 27 14:51:23 compute-0 sudo[90623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:23 compute-0 python3.9[90625]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525482.1576333-197-150846472478385/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:23 compute-0 sudo[90623]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:24 compute-0 sudo[90775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspqfvbuzhdylwzrvadwcrxpooznymgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525483.6575665-212-47728873348645/AnsiballZ_stat.py'
Jan 27 14:51:24 compute-0 sudo[90775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:24 compute-0 python3.9[90777]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:24 compute-0 sudo[90775]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:24 compute-0 sudo[90900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teejxlhjhvwgvgwpxykfffrvzjzksefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525483.6575665-212-47728873348645/AnsiballZ_copy.py'
Jan 27 14:51:24 compute-0 sudo[90900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:24 compute-0 python3.9[90902]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525483.6575665-212-47728873348645/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:24 compute-0 sudo[90900]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:25 compute-0 sudo[91052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjeocirlkkulrufdxbygpipvwpzycsjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525485.022272-227-262692756342429/AnsiballZ_file.py'
Jan 27 14:51:25 compute-0 sudo[91052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:25 compute-0 python3.9[91054]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:25 compute-0 sudo[91052]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:25 compute-0 sudo[91204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkodaswpkujgmdmlpvaydbukwkvojeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525485.6839104-235-155776390036639/AnsiballZ_command.py'
Jan 27 14:51:25 compute-0 sudo[91204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:26 compute-0 python3.9[91206]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:26 compute-0 sudo[91204]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:26 compute-0 sudo[91359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmbkkgrlxvdqdxoisxtppghzedgofiqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525486.3959079-243-16292597630737/AnsiballZ_blockinfile.py'
Jan 27 14:51:26 compute-0 sudo[91359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:27 compute-0 python3.9[91361]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:27 compute-0 sudo[91359]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:27 compute-0 sudo[91511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjxxotmcwfqqfnxgacnuwzbebhnlezy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525487.5681412-252-212585958113351/AnsiballZ_command.py'
Jan 27 14:51:27 compute-0 sudo[91511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:28 compute-0 python3.9[91513]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:28 compute-0 sudo[91511]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:28 compute-0 sudo[91664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haykpuvzqzvbdszxrjbzudqjekvngpxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525488.264803-260-257644952773519/AnsiballZ_stat.py'
Jan 27 14:51:28 compute-0 sudo[91664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:28 compute-0 python3.9[91666]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:51:28 compute-0 sudo[91664]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:29 compute-0 sudo[91818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruvajzthttwmxiplokqgustxboyoktuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525488.8844776-268-68557331291084/AnsiballZ_command.py'
Jan 27 14:51:29 compute-0 sudo[91818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:29 compute-0 python3.9[91820]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:29 compute-0 sudo[91818]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:29 compute-0 sudo[91973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izwkzovynpwgzvppzevxgztiknurelsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525489.6081748-276-83930747189433/AnsiballZ_file.py'
Jan 27 14:51:29 compute-0 sudo[91973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:30 compute-0 python3.9[91975]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:30 compute-0 sudo[91973]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:31 compute-0 python3.9[92125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:51:32 compute-0 sudo[92276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntuvewtghwbxokxxrgoiunluoezsxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525492.1584675-316-186621885567995/AnsiballZ_command.py'
Jan 27 14:51:32 compute-0 sudo[92276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:32 compute-0 python3.9[92278]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:32 compute-0 ovs-vsctl[92279]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 27 14:51:32 compute-0 sudo[92276]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:33 compute-0 sudo[92430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodxfcwefyhjlzlvgaiedagqxlvrrkot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525492.8476987-325-198130057456213/AnsiballZ_command.py'
Jan 27 14:51:33 compute-0 sudo[92430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:33 compute-0 python3.9[92432]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:33 compute-0 sudo[92430]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:33 compute-0 sudo[92585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbjokpfgpkkxznkirdzqetzzewaznolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525493.519709-333-193668095341314/AnsiballZ_command.py'
Jan 27 14:51:33 compute-0 sudo[92585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:34 compute-0 python3.9[92587]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:51:34 compute-0 ovs-vsctl[92588]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 27 14:51:34 compute-0 sudo[92585]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:34 compute-0 python3.9[92738]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:51:35 compute-0 sudo[92890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfalihyqedojlhnbkdyytkqllloqhccg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525494.9537203-350-257159518317284/AnsiballZ_file.py'
Jan 27 14:51:35 compute-0 sudo[92890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:35 compute-0 python3.9[92892]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:35 compute-0 sudo[92890]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:36 compute-0 sshd-session[92893]: Connection closed by 13.221.92.241 port 37512 [preauth]
Jan 27 14:51:36 compute-0 sudo[93044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dytygrkopajxinzogifimeungpdhytlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525495.8228252-358-212498924268747/AnsiballZ_stat.py'
Jan 27 14:51:36 compute-0 sudo[93044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:36 compute-0 python3.9[93046]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:36 compute-0 sudo[93044]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:36 compute-0 sudo[93122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrpkydarhhbxfjxhylhpripavxteolbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525495.8228252-358-212498924268747/AnsiballZ_file.py'
Jan 27 14:51:36 compute-0 sudo[93122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:36 compute-0 python3.9[93124]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:36 compute-0 sudo[93122]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:37 compute-0 sudo[93274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomyvcbfahtagyqdntjaslamcfxbrxdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525496.961986-358-88080700213599/AnsiballZ_stat.py'
Jan 27 14:51:37 compute-0 sudo[93274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:37 compute-0 python3.9[93276]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:37 compute-0 sudo[93274]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:37 compute-0 sudo[93352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swswsjtgcondoohcxrtnhlyhqoeeelln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525496.961986-358-88080700213599/AnsiballZ_file.py'
Jan 27 14:51:37 compute-0 sudo[93352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:37 compute-0 python3.9[93354]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:37 compute-0 sudo[93352]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:38 compute-0 sudo[93504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nohcttjoyysuiewraebzpnqhnpotbgkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525498.1415422-381-212793707986255/AnsiballZ_file.py'
Jan 27 14:51:38 compute-0 sudo[93504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:38 compute-0 python3.9[93506]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:38 compute-0 sudo[93504]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:39 compute-0 sudo[93656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmiperggwzzbabhktlktxlulqaupbrwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525498.7778122-389-161916959846073/AnsiballZ_stat.py'
Jan 27 14:51:39 compute-0 sudo[93656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:39 compute-0 python3.9[93658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:39 compute-0 sudo[93656]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:39 compute-0 sudo[93734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfigfcssxpekcjnuvmndxvryuapawzcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525498.7778122-389-161916959846073/AnsiballZ_file.py'
Jan 27 14:51:39 compute-0 sudo[93734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:39 compute-0 python3.9[93736]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:39 compute-0 sudo[93734]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:40 compute-0 sudo[93886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvwphoqrhdcugpvpxzlxonntruvbucac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525500.0179093-401-257900790885227/AnsiballZ_stat.py'
Jan 27 14:51:40 compute-0 sudo[93886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:40 compute-0 python3.9[93888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:40 compute-0 sudo[93886]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:40 compute-0 sudo[93964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idchndtzllhmzqiufnliuigwgkmoxhox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525500.0179093-401-257900790885227/AnsiballZ_file.py'
Jan 27 14:51:40 compute-0 sudo[93964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:40 compute-0 python3.9[93966]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:41 compute-0 sudo[93964]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:41 compute-0 sudo[94116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzluphbpmiguihaipjcwnhvdswejmlct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525501.1563406-413-158340262132117/AnsiballZ_systemd.py'
Jan 27 14:51:41 compute-0 sudo[94116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:41 compute-0 python3.9[94118]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:51:41 compute-0 systemd[1]: Reloading.
Jan 27 14:51:41 compute-0 systemd-rc-local-generator[94145]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:51:41 compute-0 systemd-sysv-generator[94149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:51:42 compute-0 sudo[94116]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:42 compute-0 sudo[94304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lawgdmjbbrfrtaonghwqggevtqrzmaqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525502.3006818-421-269893008649869/AnsiballZ_stat.py'
Jan 27 14:51:42 compute-0 sudo[94304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:42 compute-0 python3.9[94306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:42 compute-0 sudo[94304]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:43 compute-0 sudo[94382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoywbvwmwtbhnlhjmvyigxcbmoietcif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525502.3006818-421-269893008649869/AnsiballZ_file.py'
Jan 27 14:51:43 compute-0 sudo[94382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:43 compute-0 python3.9[94384]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:43 compute-0 sudo[94382]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:43 compute-0 sudo[94534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxotprmukthvcequgontgznugddwreh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525503.475911-433-118587131079009/AnsiballZ_stat.py'
Jan 27 14:51:43 compute-0 sudo[94534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:43 compute-0 python3.9[94536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:44 compute-0 sudo[94534]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:44 compute-0 sudo[94612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geihufuaaeweyaiwombkslzvxwogcouw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525503.475911-433-118587131079009/AnsiballZ_file.py'
Jan 27 14:51:44 compute-0 sudo[94612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:44 compute-0 python3.9[94614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:44 compute-0 sudo[94612]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:44 compute-0 sudo[94764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhczxpbstncdbkevewgvttwobpelmii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525504.7136054-445-93929437201348/AnsiballZ_systemd.py'
Jan 27 14:51:44 compute-0 sudo[94764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:45 compute-0 python3.9[94766]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:51:45 compute-0 systemd[1]: Reloading.
Jan 27 14:51:45 compute-0 systemd-sysv-generator[94799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:51:45 compute-0 systemd-rc-local-generator[94796]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:51:45 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 14:51:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 14:51:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 14:51:45 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 14:51:45 compute-0 sudo[94764]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:46 compute-0 sudo[94959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grmmsvhsufudypmjeojiewxipnpqrent ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525505.8990793-455-128253660734683/AnsiballZ_file.py'
Jan 27 14:51:46 compute-0 sudo[94959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:46 compute-0 python3.9[94961]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:46 compute-0 sudo[94959]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:46 compute-0 sudo[95111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoujrpvclrlpcvobaghsvmvwppftfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525506.5478184-463-253432180951462/AnsiballZ_stat.py'
Jan 27 14:51:46 compute-0 sudo[95111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:47 compute-0 python3.9[95113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:47 compute-0 sudo[95111]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:47 compute-0 sudo[95234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyhwigzxmxqyvbqexwvzapzybijebvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525506.5478184-463-253432180951462/AnsiballZ_copy.py'
Jan 27 14:51:47 compute-0 sudo[95234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:47 compute-0 python3.9[95236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525506.5478184-463-253432180951462/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:47 compute-0 sudo[95234]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:48 compute-0 sudo[95386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvosgjrzulxpxqwbhgjenrnyaimsdog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525507.9558187-480-214173716661840/AnsiballZ_file.py'
Jan 27 14:51:48 compute-0 sudo[95386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:48 compute-0 python3.9[95388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:48 compute-0 sudo[95386]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:48 compute-0 sudo[95538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckfznqhilhseiqctlfejgcigyketexiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525508.5513494-488-88056206199602/AnsiballZ_file.py'
Jan 27 14:51:48 compute-0 sudo[95538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:49 compute-0 python3.9[95540]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:51:49 compute-0 sudo[95538]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:49 compute-0 sudo[95690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvyzhjaxlvjuykgjfaswoshdxiavdod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525509.2385461-496-268577435225400/AnsiballZ_stat.py'
Jan 27 14:51:49 compute-0 sudo[95690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:49 compute-0 python3.9[95692]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:51:49 compute-0 sudo[95690]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:50 compute-0 sudo[95813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtioynbdkdfjrwacphsameoblcnwxjqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525509.2385461-496-268577435225400/AnsiballZ_copy.py'
Jan 27 14:51:50 compute-0 sudo[95813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:50 compute-0 python3.9[95815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525509.2385461-496-268577435225400/.source.json _original_basename=.klli7bw1 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:50 compute-0 sudo[95813]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:51 compute-0 python3.9[95965]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:52 compute-0 sudo[96386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnyrvkzyxraksamaiqojhorznowqwbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525512.5723639-536-64029222854908/AnsiballZ_container_config_data.py'
Jan 27 14:51:52 compute-0 sudo[96386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:53 compute-0 python3.9[96388]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 27 14:51:53 compute-0 sudo[96386]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:53 compute-0 sudo[96538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahjumvgshavbrykvzparmlqbjqxbkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525513.5309598-547-73206835561785/AnsiballZ_container_config_hash.py'
Jan 27 14:51:53 compute-0 sudo[96538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:54 compute-0 python3.9[96540]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 14:51:54 compute-0 sudo[96538]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:54 compute-0 sudo[96690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enqyqweutfxvcdtrtcfrnjipdnvrazws ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525514.429709-557-65782141873371/AnsiballZ_edpm_container_manage.py'
Jan 27 14:51:54 compute-0 sudo[96690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:55 compute-0 python3[96692]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 14:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:51:55 compute-0 podman[96728]: 2026-01-27 14:51:55.465141309 +0000 UTC m=+0.024560058 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 14:51:55 compute-0 podman[96728]: 2026-01-27 14:51:55.775128948 +0000 UTC m=+0.334547677 container create e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:51:55 compute-0 python3[96692]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 14:51:55 compute-0 sudo[96690]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 14:51:56 compute-0 sudo[96916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zalgdalyvkuwtgekfazfhozvspsmuwwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525516.0666854-565-131989975222100/AnsiballZ_stat.py'
Jan 27 14:51:56 compute-0 sudo[96916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:56 compute-0 python3.9[96918]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:51:56 compute-0 sudo[96916]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:57 compute-0 sudo[97070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-retvhjgdsycxzfjelofaxaufzvmvfxwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525516.8024356-574-191606539304933/AnsiballZ_file.py'
Jan 27 14:51:57 compute-0 sudo[97070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:57 compute-0 python3.9[97072]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:57 compute-0 sudo[97070]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:57 compute-0 sudo[97146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnckmimiihjanidclrtgmucnnmjhttfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525516.8024356-574-191606539304933/AnsiballZ_stat.py'
Jan 27 14:51:57 compute-0 sudo[97146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:57 compute-0 python3.9[97148]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:51:57 compute-0 sudo[97146]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:58 compute-0 sudo[97297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oelglotezvjxyjtorhkaikobgwrpbnur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525518.009942-574-106106077671008/AnsiballZ_copy.py'
Jan 27 14:51:58 compute-0 sudo[97297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:58 compute-0 python3.9[97299]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769525518.009942-574-106106077671008/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:51:58 compute-0 sudo[97297]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:58 compute-0 sudo[97373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufpncdhwpieqjyrbpxzwryjbihbahhks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525518.009942-574-106106077671008/AnsiballZ_systemd.py'
Jan 27 14:51:58 compute-0 sudo[97373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:51:59 compute-0 python3.9[97375]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:51:59 compute-0 systemd[1]: Reloading.
Jan 27 14:51:59 compute-0 systemd-rc-local-generator[97402]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:51:59 compute-0 systemd-sysv-generator[97405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:51:59 compute-0 sudo[97373]: pam_unix(sudo:session): session closed for user root
Jan 27 14:51:59 compute-0 sudo[97483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvjvvqkhghdalsupedtynodycdectez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525518.009942-574-106106077671008/AnsiballZ_systemd.py'
Jan 27 14:51:59 compute-0 sudo[97483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:00 compute-0 python3.9[97485]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:52:00 compute-0 systemd[1]: Reloading.
Jan 27 14:52:00 compute-0 systemd-rc-local-generator[97513]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:00 compute-0 systemd-sysv-generator[97517]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:00 compute-0 systemd[1]: Starting ovn_controller container...
Jan 27 14:52:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 27 14:52:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb61b24b669d629b38ae6f2025a66840d2638b2062f65b5b2da5c16b6e30134/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.
Jan 27 14:52:00 compute-0 podman[97525]: 2026-01-27 14:52:00.72098402 +0000 UTC m=+0.166615001 container init e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 14:52:00 compute-0 ovn_controller[97541]: + sudo -E kolla_set_configs
Jan 27 14:52:00 compute-0 podman[97525]: 2026-01-27 14:52:00.745237559 +0000 UTC m=+0.190868540 container start e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 14:52:00 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 27 14:52:00 compute-0 edpm-start-podman-container[97525]: ovn_controller
Jan 27 14:52:00 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 27 14:52:00 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 27 14:52:00 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 27 14:52:00 compute-0 systemd[97576]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 27 14:52:00 compute-0 edpm-start-podman-container[97524]: Creating additional drop-in dependency for "ovn_controller" (e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014)
Jan 27 14:52:00 compute-0 systemd[1]: Reloading.
Jan 27 14:52:00 compute-0 podman[97548]: 2026-01-27 14:52:00.855026333 +0000 UTC m=+0.096170986 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:52:00 compute-0 systemd-rc-local-generator[97629]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:00 compute-0 systemd-sysv-generator[97633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:00 compute-0 systemd[97576]: Queued start job for default target Main User Target.
Jan 27 14:52:00 compute-0 systemd[97576]: Created slice User Application Slice.
Jan 27 14:52:00 compute-0 systemd[97576]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 27 14:52:00 compute-0 systemd[97576]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 14:52:00 compute-0 systemd[97576]: Reached target Paths.
Jan 27 14:52:00 compute-0 systemd[97576]: Reached target Timers.
Jan 27 14:52:00 compute-0 systemd[97576]: Starting D-Bus User Message Bus Socket...
Jan 27 14:52:00 compute-0 systemd[97576]: Starting Create User's Volatile Files and Directories...
Jan 27 14:52:00 compute-0 systemd[97576]: Finished Create User's Volatile Files and Directories.
Jan 27 14:52:00 compute-0 systemd[97576]: Listening on D-Bus User Message Bus Socket.
Jan 27 14:52:00 compute-0 systemd[97576]: Reached target Sockets.
Jan 27 14:52:00 compute-0 systemd[97576]: Reached target Basic System.
Jan 27 14:52:00 compute-0 systemd[97576]: Reached target Main User Target.
Jan 27 14:52:00 compute-0 systemd[97576]: Startup finished in 138ms.
Jan 27 14:52:01 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 27 14:52:01 compute-0 systemd[1]: Started ovn_controller container.
Jan 27 14:52:01 compute-0 systemd[1]: e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014-1ab52dca353ca232.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 14:52:01 compute-0 systemd[1]: e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014-1ab52dca353ca232.service: Failed with result 'exit-code'.
Jan 27 14:52:01 compute-0 systemd[1]: Started Session c1 of User root.
Jan 27 14:52:01 compute-0 sudo[97483]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:01 compute-0 ovn_controller[97541]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 14:52:01 compute-0 ovn_controller[97541]: INFO:__main__:Validating config file
Jan 27 14:52:01 compute-0 ovn_controller[97541]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 14:52:01 compute-0 ovn_controller[97541]: INFO:__main__:Writing out command to execute
Jan 27 14:52:01 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: ++ cat /run_command
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + ARGS=
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + sudo kolla_copy_cacerts
Jan 27 14:52:01 compute-0 systemd[1]: Started Session c2 of User root.
Jan 27 14:52:01 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + [[ ! -n '' ]]
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + . kolla_extend_start
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 27 14:52:01 compute-0 ovn_controller[97541]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + umask 0022
Jan 27 14:52:01 compute-0 ovn_controller[97541]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2153] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2159] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <warn>  [1769525521.2162] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2167] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2170] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2173] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 14:52:01 compute-0 kernel: br-int: entered promiscuous mode
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2412] manager: (ovn-f8094e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 27 14:52:01 compute-0 systemd-udevd[97677]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:52:01 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 27 14:52:01 compute-0 systemd-udevd[97678]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 14:52:01 compute-0 ovn_controller[97541]: 2026-01-27T14:52:01Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2646] device (genev_sys_6081): carrier: link connected
Jan 27 14:52:01 compute-0 NetworkManager[56090]: <info>  [1769525521.2651] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 27 14:52:01 compute-0 python3.9[97807]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 14:52:02 compute-0 sudo[97957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwwbsjmwtggfxduxrgpepigrilfrplc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525522.3739727-619-158713194084941/AnsiballZ_stat.py'
Jan 27 14:52:02 compute-0 sudo[97957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:02 compute-0 python3.9[97959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:02 compute-0 sudo[97957]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:03 compute-0 sudo[98080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exoomdipzriandwajnjbwssoevapmekg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525522.3739727-619-158713194084941/AnsiballZ_copy.py'
Jan 27 14:52:03 compute-0 sudo[98080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:03 compute-0 python3.9[98082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525522.3739727-619-158713194084941/.source.yaml _original_basename=.fbmt0hc0 follow=False checksum=57597fc6080decea30c121e085f6e6a8fcdfa10a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:03 compute-0 sudo[98080]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:03 compute-0 sudo[98232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvpmazrljiguipyrumzzetttdtalkryy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525523.5703814-634-53708003032110/AnsiballZ_command.py'
Jan 27 14:52:03 compute-0 sudo[98232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:04 compute-0 python3.9[98234]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:52:04 compute-0 ovs-vsctl[98235]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 27 14:52:04 compute-0 sudo[98232]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:04 compute-0 sudo[98385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vreffqauztjdeilfdjraagoagsshmsby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525524.3166416-642-202025920913567/AnsiballZ_command.py'
Jan 27 14:52:04 compute-0 sudo[98385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:04 compute-0 python3.9[98387]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:52:04 compute-0 ovs-vsctl[98389]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 27 14:52:04 compute-0 sudo[98385]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:05 compute-0 sudo[98540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdpvlmqibcileezkhkvobxvcodlkthk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525525.2444522-656-259078179262144/AnsiballZ_command.py'
Jan 27 14:52:05 compute-0 sudo[98540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:05 compute-0 python3.9[98542]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:52:05 compute-0 ovs-vsctl[98543]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 27 14:52:05 compute-0 sudo[98540]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:06 compute-0 sshd-session[87062]: Connection closed by 192.168.122.30 port 60556
Jan 27 14:52:06 compute-0 sshd-session[87059]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:52:06 compute-0 systemd-logind[820]: Session 20 logged out. Waiting for processes to exit.
Jan 27 14:52:06 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Jan 27 14:52:06 compute-0 systemd[1]: session-20.scope: Consumed 46.416s CPU time.
Jan 27 14:52:06 compute-0 systemd-logind[820]: Removed session 20.
Jan 27 14:52:11 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 27 14:52:11 compute-0 systemd[97576]: Activating special unit Exit the Session...
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped target Main User Target.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped target Basic System.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped target Paths.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped target Sockets.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped target Timers.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 14:52:11 compute-0 systemd[97576]: Closed D-Bus User Message Bus Socket.
Jan 27 14:52:11 compute-0 systemd[97576]: Stopped Create User's Volatile Files and Directories.
Jan 27 14:52:11 compute-0 systemd[97576]: Removed slice User Application Slice.
Jan 27 14:52:11 compute-0 systemd[97576]: Reached target Shutdown.
Jan 27 14:52:11 compute-0 systemd[97576]: Finished Exit the Session.
Jan 27 14:52:11 compute-0 systemd[97576]: Reached target Exit the Session.
Jan 27 14:52:11 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 27 14:52:11 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 27 14:52:11 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 27 14:52:11 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 27 14:52:11 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 27 14:52:11 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 27 14:52:11 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 27 14:52:12 compute-0 sshd-session[98571]: Accepted publickey for zuul from 192.168.122.30 port 51102 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:52:12 compute-0 systemd-logind[820]: New session 22 of user zuul.
Jan 27 14:52:12 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 27 14:52:12 compute-0 sshd-session[98571]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:52:13 compute-0 python3.9[98724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:52:14 compute-0 sudo[98878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cykkgeeuxpnhzplduydnmzxchmvkosmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525534.3896704-29-13099262585507/AnsiballZ_file.py'
Jan 27 14:52:14 compute-0 sudo[98878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:15 compute-0 python3.9[98880]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:15 compute-0 sudo[98878]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:15 compute-0 sudo[99030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbgqcjzvsqstaajkhsynuyugyovbydy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525535.3400934-29-90049566278066/AnsiballZ_file.py'
Jan 27 14:52:15 compute-0 sudo[99030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:15 compute-0 python3.9[99032]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:15 compute-0 sudo[99030]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:16 compute-0 sudo[99182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivzcivquxfwvblujwsejsvltvsjxvudo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525536.0180795-29-16262082462175/AnsiballZ_file.py'
Jan 27 14:52:16 compute-0 sudo[99182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:16 compute-0 python3.9[99184]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:16 compute-0 sudo[99182]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:16 compute-0 sudo[99334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxyiqlxczmlzxercjmbgpgkydxzdjfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525536.6479874-29-168803966194156/AnsiballZ_file.py'
Jan 27 14:52:16 compute-0 sudo[99334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:17 compute-0 python3.9[99336]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:17 compute-0 sudo[99334]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:17 compute-0 sudo[99486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjudwhtimkqvmveldoevscbipmmwsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525537.2614605-29-14321948273100/AnsiballZ_file.py'
Jan 27 14:52:17 compute-0 sudo[99486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:17 compute-0 python3.9[99488]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:17 compute-0 sudo[99486]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:18 compute-0 python3.9[99638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:52:19 compute-0 sudo[99789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axpvvktlqmedzbuxrxcenablmvxxxpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525538.67813-73-44844980574656/AnsiballZ_seboolean.py'
Jan 27 14:52:19 compute-0 sudo[99789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:19 compute-0 python3.9[99791]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 14:52:19 compute-0 sudo[99789]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:21 compute-0 python3.9[99942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:22 compute-0 python3.9[100063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525540.4271224-81-136155564007480/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:22 compute-0 python3.9[100213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:23 compute-0 python3.9[100334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525542.273685-96-244696842455920/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:23 compute-0 sudo[100484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofmnbnvslpcfyximduwgawwujlxswlfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525543.512718-113-59593339793758/AnsiballZ_setup.py'
Jan 27 14:52:23 compute-0 sudo[100484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:24 compute-0 python3.9[100486]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:52:24 compute-0 sudo[100484]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:24 compute-0 sudo[100568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-migxrtzhmzhpjebidkeznpjwqcmcvwvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525543.512718-113-59593339793758/AnsiballZ_dnf.py'
Jan 27 14:52:24 compute-0 sudo[100568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:24 compute-0 python3.9[100570]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:52:26 compute-0 sudo[100568]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:27 compute-0 sudo[100721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhyymoizibbmnwtgtofrggnfhfimdwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525546.4390507-125-188621758513091/AnsiballZ_systemd.py'
Jan 27 14:52:27 compute-0 sudo[100721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:27 compute-0 python3.9[100723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:52:27 compute-0 sudo[100721]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:28 compute-0 python3.9[100876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:29 compute-0 python3.9[100997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525547.659247-133-82175822172349/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:29 compute-0 python3.9[101147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:30 compute-0 python3.9[101268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525549.1516585-133-7210200830208/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:31 compute-0 ovn_controller[97541]: 2026-01-27T14:52:31Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Jan 27 14:52:31 compute-0 ovn_controller[97541]: 2026-01-27T14:52:31Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 27 14:52:31 compute-0 podman[101368]: 2026-01-27 14:52:31.330692083 +0000 UTC m=+0.090571898 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller)
Jan 27 14:52:31 compute-0 python3.9[101444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:32 compute-0 python3.9[101566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525551.0828476-177-141030548206796/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:32 compute-0 python3.9[101716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:33 compute-0 python3.9[101837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525552.3107011-177-72047280919329/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:33 compute-0 python3.9[101987]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:52:34 compute-0 sudo[102139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trasamknivkahwtvetoipdejyjkonagt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525554.2421517-215-18236397602552/AnsiballZ_file.py'
Jan 27 14:52:34 compute-0 sudo[102139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:34 compute-0 python3.9[102141]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:34 compute-0 sudo[102139]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:35 compute-0 sudo[102291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqlxrsizyxnojtpzkcocvshccsnikox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525554.9170842-223-151174710293448/AnsiballZ_stat.py'
Jan 27 14:52:35 compute-0 sudo[102291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:35 compute-0 python3.9[102293]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:35 compute-0 sudo[102291]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:35 compute-0 sudo[102369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiajnbzsqwpjdyzlastrqogdwqwevbum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525554.9170842-223-151174710293448/AnsiballZ_file.py'
Jan 27 14:52:35 compute-0 sudo[102369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:35 compute-0 python3.9[102371]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:35 compute-0 sudo[102369]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:36 compute-0 sudo[102521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvavrubbdiaipfuirkijtnlzkldpeisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525555.9993799-223-149087052903149/AnsiballZ_stat.py'
Jan 27 14:52:36 compute-0 sudo[102521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:36 compute-0 python3.9[102523]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:36 compute-0 sudo[102521]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:36 compute-0 sudo[102599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqooiszcttrdqvllqzkclvfxzmhydhcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525555.9993799-223-149087052903149/AnsiballZ_file.py'
Jan 27 14:52:36 compute-0 sudo[102599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:36 compute-0 python3.9[102601]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:36 compute-0 sudo[102599]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:37 compute-0 sudo[102751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkvklyanidwlzymvspdvcmxvyemixfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525557.0296755-246-165117337477142/AnsiballZ_file.py'
Jan 27 14:52:37 compute-0 sudo[102751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:37 compute-0 python3.9[102753]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:37 compute-0 sudo[102751]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:37 compute-0 sudo[102903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krbazimfdexbuqnmmqsgazfcofyxxdun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525557.6726313-254-246190103309082/AnsiballZ_stat.py'
Jan 27 14:52:37 compute-0 sudo[102903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:38 compute-0 python3.9[102905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:38 compute-0 sudo[102903]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:38 compute-0 sudo[102981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpxrwdbhfqgtlzxnjgrrpzrxrpqlsha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525557.6726313-254-246190103309082/AnsiballZ_file.py'
Jan 27 14:52:38 compute-0 sudo[102981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:38 compute-0 python3.9[102983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:38 compute-0 sudo[102981]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:39 compute-0 sudo[103133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnfsnlvhjybchhfojzqgklqptzchxjmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525558.7339966-266-109190826453772/AnsiballZ_stat.py'
Jan 27 14:52:39 compute-0 sudo[103133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:39 compute-0 python3.9[103135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:39 compute-0 sudo[103133]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:39 compute-0 sudo[103211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukgvjqgtwlcfpuhjsdonuigjjcycsmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525558.7339966-266-109190826453772/AnsiballZ_file.py'
Jan 27 14:52:39 compute-0 sudo[103211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:39 compute-0 python3.9[103213]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:39 compute-0 sudo[103211]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:40 compute-0 sudo[103363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uufwtxjuwmzazefklzljermpytxwisvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525559.771689-278-194866734154551/AnsiballZ_systemd.py'
Jan 27 14:52:40 compute-0 sudo[103363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:40 compute-0 python3.9[103365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:52:40 compute-0 systemd[1]: Reloading.
Jan 27 14:52:40 compute-0 systemd-rc-local-generator[103389]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:40 compute-0 systemd-sysv-generator[103395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:40 compute-0 sudo[103363]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:40 compute-0 sudo[103552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lviaencmwxjdjqagasysicuerdhraynr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525560.7607577-286-143188763274/AnsiballZ_stat.py'
Jan 27 14:52:41 compute-0 sudo[103552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:41 compute-0 python3.9[103554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:41 compute-0 sudo[103552]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:41 compute-0 sudo[103630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnasrswguvzlzycxmaexcawelimvrvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525560.7607577-286-143188763274/AnsiballZ_file.py'
Jan 27 14:52:41 compute-0 sudo[103630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:41 compute-0 python3.9[103632]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:41 compute-0 sudo[103630]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:42 compute-0 sudo[103782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwajikcwnrxiznixxixsevxtnyzbabm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525561.7809207-298-227935583787427/AnsiballZ_stat.py'
Jan 27 14:52:42 compute-0 sudo[103782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:42 compute-0 python3.9[103784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:42 compute-0 sudo[103782]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:42 compute-0 sudo[103860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucytvpwpnkzcyhnbejmrflxcekmaqeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525561.7809207-298-227935583787427/AnsiballZ_file.py'
Jan 27 14:52:42 compute-0 sudo[103860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:42 compute-0 python3.9[103862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:42 compute-0 sudo[103860]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:43 compute-0 sudo[104012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzuqpktofkuizuazpbrnzljcdqzqrzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525562.8609815-310-19597001953877/AnsiballZ_systemd.py'
Jan 27 14:52:43 compute-0 sudo[104012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:43 compute-0 python3.9[104014]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:52:43 compute-0 systemd[1]: Reloading.
Jan 27 14:52:43 compute-0 systemd-sysv-generator[104040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:43 compute-0 systemd-rc-local-generator[104036]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:43 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 14:52:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 14:52:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 14:52:43 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 14:52:43 compute-0 sudo[104012]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:44 compute-0 sudo[104205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqhglwyfppywtckrgiunqlorqcrpgkpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525563.9831603-320-73012762801933/AnsiballZ_file.py'
Jan 27 14:52:44 compute-0 sudo[104205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:44 compute-0 python3.9[104207]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:44 compute-0 sudo[104205]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:44 compute-0 sudo[104357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhbidrphqriqvdjdcmyrwiihtejtkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525564.6397836-328-233809096875391/AnsiballZ_stat.py'
Jan 27 14:52:44 compute-0 sudo[104357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:45 compute-0 python3.9[104359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:45 compute-0 sudo[104357]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:45 compute-0 sudo[104480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkinsbftrspmudqyjhxjyotwxbrjenb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525564.6397836-328-233809096875391/AnsiballZ_copy.py'
Jan 27 14:52:45 compute-0 sudo[104480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:45 compute-0 python3.9[104482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525564.6397836-328-233809096875391/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:45 compute-0 sudo[104480]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:46 compute-0 sudo[104632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblaqohhfevrzuzdwagxgkrqatlkdmjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525565.9881063-345-79487543028939/AnsiballZ_file.py'
Jan 27 14:52:46 compute-0 sudo[104632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:46 compute-0 python3.9[104634]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:46 compute-0 sudo[104632]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:46 compute-0 sudo[104784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lglpwrnpqitxscohlneiuyxauiybblss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525566.6175382-353-140189351825816/AnsiballZ_file.py'
Jan 27 14:52:46 compute-0 sudo[104784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:47 compute-0 python3.9[104786]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:52:47 compute-0 sudo[104784]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:47 compute-0 sudo[104936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtwnuixrbjeixomhhrunydvyjiswdwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525567.2675462-361-13700700692361/AnsiballZ_stat.py'
Jan 27 14:52:47 compute-0 sudo[104936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:47 compute-0 python3.9[104938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:52:47 compute-0 sudo[104936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:48 compute-0 sudo[105059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqbjrukczxfhsptqcztenzevusjfgdqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525567.2675462-361-13700700692361/AnsiballZ_copy.py'
Jan 27 14:52:48 compute-0 sudo[105059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:48 compute-0 python3.9[105061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525567.2675462-361-13700700692361/.source.json _original_basename=.16e7zpxp follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:48 compute-0 sudo[105059]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:48 compute-0 python3.9[105211]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:50 compute-0 sudo[105632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pparanvxgqojjmkgjyvzorzzpbkybijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525570.3841364-401-189216878778719/AnsiballZ_container_config_data.py'
Jan 27 14:52:50 compute-0 sudo[105632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:51 compute-0 python3.9[105634]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 27 14:52:51 compute-0 sudo[105632]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:51 compute-0 sudo[105784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfseieifkhijcpqdywaylxufazdplfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525571.3120825-412-42422301714866/AnsiballZ_container_config_hash.py'
Jan 27 14:52:51 compute-0 sudo[105784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:51 compute-0 python3.9[105786]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 14:52:51 compute-0 sudo[105784]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:52 compute-0 sudo[105936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwomodehfaowykhtymsobqnphlexuqmo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525572.271025-422-109198379800852/AnsiballZ_edpm_container_manage.py'
Jan 27 14:52:52 compute-0 sudo[105936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:53 compute-0 python3[105938]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 14:52:53 compute-0 podman[105974]: 2026-01-27 14:52:53.27768844 +0000 UTC m=+0.058453693 container create ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 14:52:53 compute-0 podman[105974]: 2026-01-27 14:52:53.239471543 +0000 UTC m=+0.020236806 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:52:53 compute-0 python3[105938]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 14:52:53 compute-0 sudo[105936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:53 compute-0 sudo[106162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huhtrzqrpnjhatcjlhhnptnmlxemqevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525573.591221-430-247139704765879/AnsiballZ_stat.py'
Jan 27 14:52:53 compute-0 sudo[106162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:54 compute-0 python3.9[106164]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:52:54 compute-0 sudo[106162]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:54 compute-0 sudo[106316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidvzltrljcfcugjawxxbhudkloqgjdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525574.5023184-439-61012072220034/AnsiballZ_file.py'
Jan 27 14:52:54 compute-0 sudo[106316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:55 compute-0 python3.9[106318]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:55 compute-0 sudo[106316]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:55 compute-0 sudo[106392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgeodpaxxddlmxbmcmdtahbdzjhhgwtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525574.5023184-439-61012072220034/AnsiballZ_stat.py'
Jan 27 14:52:55 compute-0 sudo[106392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:55 compute-0 python3.9[106394]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:52:55 compute-0 sudo[106392]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:56 compute-0 sudo[106543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivheqtagdeigzjvymuyfdqsjblepcyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525575.5633974-439-121940488925197/AnsiballZ_copy.py'
Jan 27 14:52:56 compute-0 sudo[106543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:56 compute-0 python3.9[106545]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769525575.5633974-439-121940488925197/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:52:56 compute-0 sudo[106543]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:56 compute-0 sudo[106619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xalptaexgsiaciczppgelktsskfrwvrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525575.5633974-439-121940488925197/AnsiballZ_systemd.py'
Jan 27 14:52:56 compute-0 sudo[106619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:56 compute-0 python3.9[106621]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:52:56 compute-0 systemd[1]: Reloading.
Jan 27 14:52:57 compute-0 systemd-rc-local-generator[106648]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:57 compute-0 systemd-sysv-generator[106652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:57 compute-0 sudo[106619]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:57 compute-0 sudo[106730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lawoisxuhrrzzynvmadmfxuwbjqaxuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525575.5633974-439-121940488925197/AnsiballZ_systemd.py'
Jan 27 14:52:57 compute-0 sudo[106730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:52:57 compute-0 python3.9[106732]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:52:57 compute-0 systemd[1]: Reloading.
Jan 27 14:52:57 compute-0 systemd-rc-local-generator[106760]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:57 compute-0 systemd-sysv-generator[106765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:58 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 27 14:52:58 compute-0 systemd[1]: Started libcrun container.
Jan 27 14:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2645ef2430b41311bf41e2c4a523ef84cdf240c75e963bbea2f4c8d7633b5be4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2645ef2430b41311bf41e2c4a523ef84cdf240c75e963bbea2f4c8d7633b5be4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 14:52:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.
Jan 27 14:52:58 compute-0 podman[106773]: 2026-01-27 14:52:58.261846049 +0000 UTC m=+0.148714410 container init ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + sudo -E kolla_set_configs
Jan 27 14:52:58 compute-0 podman[106773]: 2026-01-27 14:52:58.283900336 +0000 UTC m=+0.170768677 container start ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:52:58 compute-0 edpm-start-podman-container[106773]: ovn_metadata_agent
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Validating config file
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Copying service configuration files
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Writing out command to execute
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: ++ cat /run_command
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + CMD=neutron-ovn-metadata-agent
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + ARGS=
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + sudo kolla_copy_cacerts
Jan 27 14:52:58 compute-0 edpm-start-podman-container[106772]: Creating additional drop-in dependency for "ovn_metadata_agent" (ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d)
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + [[ ! -n '' ]]
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + . kolla_extend_start
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: Running command: 'neutron-ovn-metadata-agent'
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + umask 0022
Jan 27 14:52:58 compute-0 ovn_metadata_agent[106788]: + exec neutron-ovn-metadata-agent
Jan 27 14:52:58 compute-0 systemd[1]: Reloading.
Jan 27 14:52:58 compute-0 podman[106795]: 2026-01-27 14:52:58.387495657 +0000 UTC m=+0.086511441 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 14:52:58 compute-0 systemd-rc-local-generator[106865]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:52:58 compute-0 systemd-sysv-generator[106868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:52:58 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 27 14:52:58 compute-0 sudo[106730]: pam_unix(sudo:session): session closed for user root
Jan 27 14:52:59 compute-0 python3.9[107025]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.153 106793 INFO neutron.common.config [-] Logging enabled!
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.153 106793 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.153 106793 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.154 106793 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.155 106793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.156 106793 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.157 106793 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.158 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.159 106793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.160 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.161 106793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.162 106793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.163 106793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.164 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.165 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.166 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.167 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.168 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.169 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.170 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.171 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.172 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.173 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.174 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 sudo[107175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypglvdnrxlpgdhpsakmnvclrmniihwme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525579.8639863-484-213027707422853/AnsiballZ_stat.py'
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.175 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.176 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.177 106793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 sudo[107175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.178 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.179 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.180 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.181 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.182 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.183 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.184 106793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.185 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.186 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.187 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.188 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.189 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.189 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.189 106793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.189 106793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.199 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.199 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.199 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.200 106793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.200 106793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.211 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 320c7d4f-8b68-4343-92ac-19c792fa938e (UUID: 320c7d4f-8b68-4343-92ac-19c792fa938e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.241 106793 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.242 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.242 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.242 106793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.247 106793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.252 106793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.260 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '320c7d4f-8b68-4343-92ac-19c792fa938e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], external_ids={}, name=320c7d4f-8b68-4343-92ac-19c792fa938e, nb_cfg_timestamp=1769525529239, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.261 106793 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fdbd70360d0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.263 106793 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.263 106793 INFO oslo_service.service [-] Starting 1 workers
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.267 106793 DEBUG oslo_service.service [-] Started child 107178 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.270 107178 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-165413'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.271 106793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpxu57i8ma/privsep.sock']
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.294 107178 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.295 107178 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.295 107178 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.299 107178 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.308 107178 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.323 107178 INFO eventlet.wsgi.server [-] (107178) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 27 14:53:00 compute-0 python3.9[107177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:53:00 compute-0 sudo[107175]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:00 compute-0 sudo[107305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhgonltzrfwnxsjymayksyakyflqjpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525579.8639863-484-213027707422853/AnsiballZ_copy.py'
Jan 27 14:53:00 compute-0 sudo[107305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:00 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.904 106793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.905 106793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxu57i8ma/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.799 107308 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.804 107308 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.807 107308 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.807 107308 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107308
Jan 27 14:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:00.907 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[8061dcfd-11fb-4ed5-a008-133efbf7d8b8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:53:00 compute-0 python3.9[107307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525579.8639863-484-213027707422853/.source.yaml _original_basename=.mxokh6ie follow=False checksum=7154bfdf7ce742a0519afae63e14aac6e7fceb35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:00 compute-0 sudo[107305]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:01.441 107308 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:53:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:01.441 107308 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:53:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:01.441 107308 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:53:01 compute-0 sshd-session[98574]: Connection closed by 192.168.122.30 port 51102
Jan 27 14:53:01 compute-0 sshd-session[98571]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:53:01 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 27 14:53:01 compute-0 systemd[1]: session-22.scope: Consumed 34.706s CPU time.
Jan 27 14:53:01 compute-0 systemd-logind[820]: Session 22 logged out. Waiting for processes to exit.
Jan 27 14:53:01 compute-0 systemd-logind[820]: Removed session 22.
Jan 27 14:53:01 compute-0 podman[107337]: 2026-01-27 14:53:01.837714097 +0000 UTC m=+0.098282377 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 14:53:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:01.994 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2cf1a4-b9de-4bd2-a129-89c13df054cc]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 14:53:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:01.996 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, column=external_ids, values=({'neutron:ovn-metadata-id': '1c3a9e19-4c44-5d6d-8ca8-21122e7cb51d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.024 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.035 106793 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.036 106793 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.037 106793 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.038 106793 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.039 106793 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.040 106793 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.041 106793 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.042 106793 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.043 106793 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.044 106793 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.045 106793 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.046 106793 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.047 106793 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.048 106793 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.049 106793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.050 106793 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.051 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.052 106793 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.053 106793 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.054 106793 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.055 106793 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.056 106793 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.057 106793 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.058 106793 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.059 106793 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.059 106793 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.059 106793 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.059 106793 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.059 106793 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.060 106793 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.061 106793 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.062 106793 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.063 106793 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.064 106793 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.065 106793 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.066 106793 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.067 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.068 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.069 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.070 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 14:53:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:53:02.071 106793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 14:53:08 compute-0 sshd-session[107364]: Accepted publickey for zuul from 192.168.122.30 port 57848 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:53:08 compute-0 systemd-logind[820]: New session 23 of user zuul.
Jan 27 14:53:08 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 27 14:53:08 compute-0 sshd-session[107364]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:53:09 compute-0 python3.9[107517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:53:11 compute-0 sudo[107671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyiplfnymfvywgxvxgnlttzsmejbvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525590.3436306-29-189275290062400/AnsiballZ_command.py'
Jan 27 14:53:11 compute-0 sudo[107671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:11 compute-0 python3.9[107673]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:11 compute-0 sudo[107671]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:12 compute-0 sudo[107834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkjhcgergapvuwnyerrwloaekyforavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525591.6881807-40-180900107562123/AnsiballZ_systemd_service.py'
Jan 27 14:53:12 compute-0 sudo[107834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:12 compute-0 python3.9[107836]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:53:12 compute-0 systemd[1]: Reloading.
Jan 27 14:53:12 compute-0 systemd-sysv-generator[107864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:53:12 compute-0 systemd-rc-local-generator[107861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:53:12 compute-0 sudo[107834]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:13 compute-0 python3.9[108021]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:53:13 compute-0 network[108038]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:53:13 compute-0 network[108039]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:53:13 compute-0 network[108040]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:53:18 compute-0 sudo[108299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geokruvjuldfhoffyhifldzihbkevlec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525598.4946222-59-131764294193631/AnsiballZ_systemd_service.py'
Jan 27 14:53:18 compute-0 sudo[108299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:19 compute-0 python3.9[108301]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:19 compute-0 sudo[108299]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:19 compute-0 sudo[108452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtvblwlfmdhtefccfonqsosrukkyzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525599.230449-59-279289384345454/AnsiballZ_systemd_service.py'
Jan 27 14:53:19 compute-0 sudo[108452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:19 compute-0 python3.9[108454]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:19 compute-0 sudo[108452]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:20 compute-0 sudo[108605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiyowztysczbutvtgxpugllszjcuxjqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525600.0062656-59-200555222281954/AnsiballZ_systemd_service.py'
Jan 27 14:53:20 compute-0 sudo[108605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:20 compute-0 python3.9[108607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:20 compute-0 sudo[108605]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:21 compute-0 sudo[108758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zofvwdxtmygeukzpnnciwhyvyhfeinwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525600.7391703-59-169244345111964/AnsiballZ_systemd_service.py'
Jan 27 14:53:21 compute-0 sudo[108758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:21 compute-0 python3.9[108760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:21 compute-0 sudo[108758]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:21 compute-0 sudo[108911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvcshcnzmvwobfnmpuuneryewrnyufse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525601.4773028-59-108038227805746/AnsiballZ_systemd_service.py'
Jan 27 14:53:21 compute-0 sudo[108911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:22 compute-0 python3.9[108913]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:22 compute-0 sudo[108911]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:22 compute-0 sudo[109064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnlyqehxetjhhlzsgxajknimnmshoadg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525602.2513187-59-249436545750188/AnsiballZ_systemd_service.py'
Jan 27 14:53:22 compute-0 sudo[109064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:22 compute-0 python3.9[109066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:22 compute-0 sudo[109064]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:23 compute-0 sudo[109217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzngxiwrhipbowefwkyyqjoboqahijis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525602.9566858-59-258467376153500/AnsiballZ_systemd_service.py'
Jan 27 14:53:23 compute-0 sudo[109217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:23 compute-0 python3.9[109219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:53:23 compute-0 sudo[109217]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:24 compute-0 sudo[109370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcjapobfbinqdceorcybtcfrlejvkzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525603.8542538-111-218989897138095/AnsiballZ_file.py'
Jan 27 14:53:24 compute-0 sudo[109370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:24 compute-0 python3.9[109372]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:24 compute-0 sudo[109370]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:24 compute-0 sudo[109522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvpgyxemhtxinzjufhfbplwfxrjzums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525604.6322832-111-116822226848788/AnsiballZ_file.py'
Jan 27 14:53:24 compute-0 sudo[109522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:25 compute-0 python3.9[109524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:25 compute-0 sudo[109522]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:25 compute-0 sudo[109674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthfwkdtxnxuibxcckaylgputitmacfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525605.4383972-111-72848245958310/AnsiballZ_file.py'
Jan 27 14:53:25 compute-0 sudo[109674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:25 compute-0 python3.9[109676]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:25 compute-0 sudo[109674]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:26 compute-0 sudo[109826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtxsqtckuqjahrkypxdvgnkhjuyjwtpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525606.0497787-111-98581591929297/AnsiballZ_file.py'
Jan 27 14:53:26 compute-0 sudo[109826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:26 compute-0 python3.9[109828]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:26 compute-0 sudo[109826]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:26 compute-0 sudo[109978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbotvbzkfpjaimobftsvepdoyrsnycsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525606.647072-111-96481987971929/AnsiballZ_file.py'
Jan 27 14:53:26 compute-0 sudo[109978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:27 compute-0 python3.9[109980]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:27 compute-0 sudo[109978]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:27 compute-0 sudo[110130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgruyccimrkyezzhgsicbxpmbaqwkpci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525607.2233074-111-274418033416339/AnsiballZ_file.py'
Jan 27 14:53:27 compute-0 sudo[110130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:27 compute-0 python3.9[110132]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:27 compute-0 sudo[110130]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:28 compute-0 sudo[110282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhozrsrcfhezkfdnzaqdzstecdierjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525607.7963116-111-36892868394511/AnsiballZ_file.py'
Jan 27 14:53:28 compute-0 sudo[110282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:28 compute-0 python3.9[110284]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:28 compute-0 sudo[110282]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:28 compute-0 sudo[110448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmkiflthopfgjphqqkzdsdlbnyaecuuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525608.4720192-161-275366158201845/AnsiballZ_file.py'
Jan 27 14:53:28 compute-0 sudo[110448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:28 compute-0 podman[110408]: 2026-01-27 14:53:28.758799334 +0000 UTC m=+0.063752477 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:53:28 compute-0 python3.9[110455]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:28 compute-0 sudo[110448]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:29 compute-0 sudo[110607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gruxflkjzmenujtaoajotpvpflfgnsnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525609.0611935-161-61185928861582/AnsiballZ_file.py'
Jan 27 14:53:29 compute-0 sudo[110607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:29 compute-0 python3.9[110609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:29 compute-0 sudo[110607]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:29 compute-0 sudo[110759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwegqiawgnyhzzpyrrtrzckcugjhfieg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525609.6788402-161-96288129531219/AnsiballZ_file.py'
Jan 27 14:53:29 compute-0 sudo[110759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:30 compute-0 python3.9[110761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:30 compute-0 sudo[110759]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:30 compute-0 sudo[110911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdmpmlbusfwrswehxribabrugepbnvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525610.2856603-161-96315224044965/AnsiballZ_file.py'
Jan 27 14:53:30 compute-0 sudo[110911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:30 compute-0 python3.9[110913]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:30 compute-0 sudo[110911]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:31 compute-0 sudo[111063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopsarhdcxoalqyqsmpnsabpwlfpexuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525610.88483-161-104858998401117/AnsiballZ_file.py'
Jan 27 14:53:31 compute-0 sudo[111063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:31 compute-0 python3.9[111065]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:31 compute-0 sudo[111063]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:31 compute-0 sudo[111215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjekriqkowcaacwxdgeeilahhwbjiohh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525611.4330728-161-73331738130968/AnsiballZ_file.py'
Jan 27 14:53:31 compute-0 sudo[111215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:31 compute-0 python3.9[111217]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:31 compute-0 sudo[111215]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:32 compute-0 sudo[111377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzfybnyicflbpppxcpezneqjqmwqelnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525612.0387957-161-124437288427366/AnsiballZ_file.py'
Jan 27 14:53:32 compute-0 sudo[111377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:32 compute-0 podman[111341]: 2026-01-27 14:53:32.331511598 +0000 UTC m=+0.086622144 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:53:32 compute-0 python3.9[111384]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:53:32 compute-0 sudo[111377]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:32 compute-0 sudo[111544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjekjpejiuhhhrmjgazringsoggoovym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525612.6837807-212-263623746796664/AnsiballZ_command.py'
Jan 27 14:53:32 compute-0 sudo[111544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:33 compute-0 python3.9[111546]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:33 compute-0 sudo[111544]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:33 compute-0 python3.9[111698]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 14:53:34 compute-0 sudo[111848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnsodpjxezlvsjsbkjniyjwdrumcmtte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525614.229441-230-3077474090574/AnsiballZ_systemd_service.py'
Jan 27 14:53:34 compute-0 sudo[111848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:34 compute-0 python3.9[111850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:53:34 compute-0 systemd[1]: Reloading.
Jan 27 14:53:34 compute-0 systemd-sysv-generator[111881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:53:34 compute-0 systemd-rc-local-generator[111878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:53:35 compute-0 sudo[111848]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:35 compute-0 sudo[112035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeedqgojeatrqjqhcmjfblamgddxaqmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525615.2691147-238-125303953903432/AnsiballZ_command.py'
Jan 27 14:53:35 compute-0 sudo[112035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:35 compute-0 python3.9[112037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:35 compute-0 sudo[112035]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:36 compute-0 sudo[112188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntqtxjrftkkvdaswviutoomnwkuwohxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525615.8856454-238-39753529618707/AnsiballZ_command.py'
Jan 27 14:53:36 compute-0 sudo[112188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:36 compute-0 python3.9[112190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:36 compute-0 sudo[112188]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:36 compute-0 sudo[112341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohpmwwkbtkizmndvvhqcwhkgrpvxuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525616.4819162-238-265159074380775/AnsiballZ_command.py'
Jan 27 14:53:36 compute-0 sudo[112341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:36 compute-0 python3.9[112343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:37 compute-0 sudo[112341]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:37 compute-0 sudo[112494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyjkjypkfnyernevjfnaexxibtwjxcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525617.1345158-238-205694268170533/AnsiballZ_command.py'
Jan 27 14:53:37 compute-0 sudo[112494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:37 compute-0 python3.9[112496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:37 compute-0 sudo[112494]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:38 compute-0 sudo[112647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghioqtxzwwpgcpbhovruwceddcxomwya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525617.791389-238-181614301273611/AnsiballZ_command.py'
Jan 27 14:53:38 compute-0 sudo[112647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:38 compute-0 python3.9[112649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:38 compute-0 sudo[112647]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:38 compute-0 sudo[112800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnkkmwqkyhrqbywmrnecpfodafnubsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525618.6501584-238-167990964204257/AnsiballZ_command.py'
Jan 27 14:53:38 compute-0 sudo[112800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:39 compute-0 python3.9[112802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:39 compute-0 sudo[112800]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:39 compute-0 sudo[112953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydxothmzpjwdjiduiumystpeyavcpti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525619.2860305-238-79952330847490/AnsiballZ_command.py'
Jan 27 14:53:39 compute-0 sudo[112953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:39 compute-0 python3.9[112955]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:53:39 compute-0 sudo[112953]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:40 compute-0 sudo[113106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkelkeopnicsrintpfofjpapkweybjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525620.1667662-292-53844907546572/AnsiballZ_getent.py'
Jan 27 14:53:40 compute-0 sudo[113106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:40 compute-0 python3.9[113108]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 27 14:53:40 compute-0 sudo[113106]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:41 compute-0 sudo[113259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oblykpsifikugjoalvbqxfgztlshzgbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525620.990623-300-174589516585190/AnsiballZ_group.py'
Jan 27 14:53:41 compute-0 sudo[113259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:41 compute-0 python3.9[113261]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 14:53:41 compute-0 groupadd[113262]: group added to /etc/group: name=libvirt, GID=42473
Jan 27 14:53:41 compute-0 groupadd[113262]: group added to /etc/gshadow: name=libvirt
Jan 27 14:53:41 compute-0 groupadd[113262]: new group: name=libvirt, GID=42473
Jan 27 14:53:41 compute-0 sudo[113259]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:42 compute-0 sudo[113417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haxtmezeigbmyzovllwehlbxhzigzjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525621.9641044-308-7307239928774/AnsiballZ_user.py'
Jan 27 14:53:42 compute-0 sudo[113417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:42 compute-0 python3.9[113419]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 14:53:42 compute-0 useradd[113421]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 14:53:42 compute-0 sudo[113417]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:43 compute-0 sudo[113577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azphlemmuelvxvolysqpctkgcopypnrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525623.235412-319-178560670913461/AnsiballZ_setup.py'
Jan 27 14:53:43 compute-0 sudo[113577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:44 compute-0 python3.9[113579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:53:44 compute-0 sudo[113577]: pam_unix(sudo:session): session closed for user root
Jan 27 14:53:44 compute-0 sudo[113661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledhympngerutbyhplylrmkohvyrbasb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525623.235412-319-178560670913461/AnsiballZ_dnf.py'
Jan 27 14:53:44 compute-0 sudo[113661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:53:45 compute-0 python3.9[113663]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:53:59 compute-0 podman[113847]: 2026-01-27 14:53:59.301497564 +0000 UTC m=+0.062603544 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:54:00.201 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:54:00.202 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:54:00.202 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:54:03 compute-0 podman[113867]: 2026-01-27 14:54:03.325985351 +0000 UTC m=+0.078936318 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 14:54:22 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:54:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:54:30 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 27 14:54:30 compute-0 podman[113908]: 2026-01-27 14:54:30.328442354 +0000 UTC m=+0.070444031 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 27 14:54:33 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:54:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:54:34 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 27 14:54:34 compute-0 podman[113932]: 2026-01-27 14:54:34.336324199 +0000 UTC m=+0.089354019 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:55:00.202 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:55:00.203 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:55:00.203 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:55:01 compute-0 podman[123985]: 2026-01-27 14:55:01.286678274 +0000 UTC m=+0.047856919 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:55:05 compute-0 podman[126854]: 2026-01-27 14:55:05.348703747 +0000 UTC m=+0.109098011 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 14:55:26 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 14:55:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 14:55:28 compute-0 groupadd[130883]: group added to /etc/group: name=dnsmasq, GID=993
Jan 27 14:55:28 compute-0 groupadd[130883]: group added to /etc/gshadow: name=dnsmasq
Jan 27 14:55:28 compute-0 groupadd[130883]: new group: name=dnsmasq, GID=993
Jan 27 14:55:28 compute-0 useradd[130890]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 27 14:55:28 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:55:28 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 27 14:55:28 compute-0 dbus-broker-launch[810]: Noticed file-system modification, trigger reload.
Jan 27 14:55:29 compute-0 groupadd[130903]: group added to /etc/group: name=clevis, GID=992
Jan 27 14:55:29 compute-0 groupadd[130903]: group added to /etc/gshadow: name=clevis
Jan 27 14:55:29 compute-0 groupadd[130903]: new group: name=clevis, GID=992
Jan 27 14:55:29 compute-0 useradd[130910]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 27 14:55:29 compute-0 usermod[130920]: add 'clevis' to group 'tss'
Jan 27 14:55:29 compute-0 usermod[130920]: add 'clevis' to shadow group 'tss'
Jan 27 14:55:31 compute-0 podman[130941]: 2026-01-27 14:55:31.461487149 +0000 UTC m=+0.056236126 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 14:55:33 compute-0 polkitd[43538]: Reloading rules
Jan 27 14:55:33 compute-0 polkitd[43538]: Collecting garbage unconditionally...
Jan 27 14:55:33 compute-0 polkitd[43538]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 14:55:33 compute-0 polkitd[43538]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 14:55:33 compute-0 polkitd[43538]: Finished loading, compiling and executing 3 rules
Jan 27 14:55:33 compute-0 polkitd[43538]: Reloading rules
Jan 27 14:55:33 compute-0 polkitd[43538]: Collecting garbage unconditionally...
Jan 27 14:55:33 compute-0 polkitd[43538]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 14:55:33 compute-0 polkitd[43538]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 14:55:33 compute-0 polkitd[43538]: Finished loading, compiling and executing 3 rules
Jan 27 14:55:35 compute-0 groupadd[131129]: group added to /etc/group: name=ceph, GID=167
Jan 27 14:55:35 compute-0 groupadd[131129]: group added to /etc/gshadow: name=ceph
Jan 27 14:55:35 compute-0 groupadd[131129]: new group: name=ceph, GID=167
Jan 27 14:55:35 compute-0 useradd[131135]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 27 14:55:36 compute-0 podman[131142]: 2026-01-27 14:55:36.326324488 +0000 UTC m=+0.081369867 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:55:38 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 27 14:55:38 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 27 14:55:38 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 27 14:55:38 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 27 14:55:38 compute-0 systemd[1]: sshd.service: Consumed 1.284s CPU time, read 32.0K from disk, written 0B to disk.
Jan 27 14:55:38 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 27 14:55:38 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 27 14:55:38 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:55:38 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:55:38 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 14:55:38 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 27 14:55:38 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 27 14:55:38 compute-0 sshd[131680]: Server listening on 0.0.0.0 port 22.
Jan 27 14:55:38 compute-0 sshd[131680]: Server listening on :: port 22.
Jan 27 14:55:38 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 27 14:55:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:55:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:55:39 compute-0 systemd[1]: Reloading.
Jan 27 14:55:40 compute-0 systemd-rc-local-generator[131939]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:40 compute-0 systemd-sysv-generator[131943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:55:42 compute-0 sudo[113661]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:43 compute-0 sudo[136092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encmkanpexqokesmdimfwwdmwhzesvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525742.845664-331-100029865239145/AnsiballZ_systemd.py'
Jan 27 14:55:43 compute-0 sudo[136092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:43 compute-0 python3.9[136119]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:55:44 compute-0 systemd[1]: Reloading.
Jan 27 14:55:44 compute-0 systemd-rc-local-generator[136498]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:44 compute-0 systemd-sysv-generator[136514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:44 compute-0 sudo[136092]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:44 compute-0 sudo[137353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvmwebfrhqceelyrwfixvzfghbvnmdbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525744.5055835-331-176826940290298/AnsiballZ_systemd.py'
Jan 27 14:55:44 compute-0 sudo[137353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:45 compute-0 python3.9[137385]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:55:45 compute-0 systemd[1]: Reloading.
Jan 27 14:55:45 compute-0 systemd-sysv-generator[137815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:45 compute-0 systemd-rc-local-generator[137811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:45 compute-0 sudo[137353]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:45 compute-0 sudo[138482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tovcriyeitecesfrnmfpeejsybcivjcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525745.5713344-331-273063700498581/AnsiballZ_systemd.py'
Jan 27 14:55:45 compute-0 sudo[138482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:46 compute-0 python3.9[138484]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:55:46 compute-0 systemd[1]: Reloading.
Jan 27 14:55:46 compute-0 systemd-rc-local-generator[138880]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:46 compute-0 systemd-sysv-generator[138885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:46 compute-0 sudo[138482]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:46 compute-0 sudo[139693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viseubvjjminbegmuoonikfcqcbvotwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525746.597623-331-140466870737135/AnsiballZ_systemd.py'
Jan 27 14:55:46 compute-0 sudo[139693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:47 compute-0 python3.9[139718]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:55:47 compute-0 systemd[1]: Reloading.
Jan 27 14:55:47 compute-0 systemd-rc-local-generator[140192]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:47 compute-0 systemd-sysv-generator[140196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:47 compute-0 sudo[139693]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:48 compute-0 sudo[140828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkofgiccutejzavlktjftsykqniiyyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525747.7137525-360-97351187101633/AnsiballZ_systemd.py'
Jan 27 14:55:48 compute-0 sudo[140828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:48 compute-0 python3.9[140851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:48 compute-0 systemd[1]: Reloading.
Jan 27 14:55:48 compute-0 systemd-rc-local-generator[141151]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:48 compute-0 systemd-sysv-generator[141158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:48 compute-0 sudo[140828]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:49 compute-0 sudo[141415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsozqmtjggxcniwyuvyukxqfbkpeiyuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525748.8933232-360-185637751134673/AnsiballZ_systemd.py'
Jan 27 14:55:49 compute-0 sudo[141415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:55:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:55:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.557s CPU time.
Jan 27 14:55:49 compute-0 systemd[1]: run-r2d47cfdc1c164de7aad4edb08f209e42.service: Deactivated successfully.
Jan 27 14:55:49 compute-0 python3.9[141417]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:49 compute-0 systemd[1]: Reloading.
Jan 27 14:55:49 compute-0 systemd-rc-local-generator[141450]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:49 compute-0 systemd-sysv-generator[141453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:49 compute-0 sudo[141415]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:50 compute-0 sudo[141607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvkiiteiyjbibgsmyvjbzgzjglqtart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525749.978077-360-38076246633243/AnsiballZ_systemd.py'
Jan 27 14:55:50 compute-0 sudo[141607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:50 compute-0 python3.9[141609]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:50 compute-0 systemd[1]: Reloading.
Jan 27 14:55:50 compute-0 systemd-rc-local-generator[141640]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:50 compute-0 systemd-sysv-generator[141643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:51 compute-0 sudo[141607]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:51 compute-0 sudo[141798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwcdcaklcdgpqloraeccltugknovxkdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525751.1281705-360-148774110021898/AnsiballZ_systemd.py'
Jan 27 14:55:51 compute-0 sudo[141798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:51 compute-0 python3.9[141800]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:51 compute-0 sudo[141798]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:52 compute-0 sudo[141953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadlfgccnkxpozjeecqywozbaoeueots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525751.9434993-360-41290071042214/AnsiballZ_systemd.py'
Jan 27 14:55:52 compute-0 sudo[141953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:52 compute-0 python3.9[141955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:52 compute-0 systemd[1]: Reloading.
Jan 27 14:55:52 compute-0 systemd-rc-local-generator[141986]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:52 compute-0 systemd-sysv-generator[141991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:52 compute-0 sudo[141953]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:53 compute-0 sudo[142144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqwcdfwicejjoehbjzjgisalcseyusha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525753.1393292-396-228898642820137/AnsiballZ_systemd.py'
Jan 27 14:55:53 compute-0 sudo[142144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:53 compute-0 python3.9[142146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 14:55:53 compute-0 systemd[1]: Reloading.
Jan 27 14:55:53 compute-0 systemd-rc-local-generator[142177]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:55:53 compute-0 systemd-sysv-generator[142181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:55:54 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 27 14:55:54 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 27 14:55:54 compute-0 sudo[142144]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:54 compute-0 sudo[142338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrijpxyhidhwmxdfackpumttzvbesfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525754.348275-404-67970341814688/AnsiballZ_systemd.py'
Jan 27 14:55:54 compute-0 sudo[142338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:54 compute-0 python3.9[142340]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:54 compute-0 sudo[142338]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:55 compute-0 sudo[142493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuphakovdiuktbzhbedgcxfxxvmbslkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525755.1121848-404-89447521876228/AnsiballZ_systemd.py'
Jan 27 14:55:55 compute-0 sudo[142493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:55 compute-0 python3.9[142495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:55 compute-0 sudo[142493]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:56 compute-0 sudo[142648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umghcmlwkcamqpodmxhjpsfkppibgemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525755.8749926-404-81119688590749/AnsiballZ_systemd.py'
Jan 27 14:55:56 compute-0 sudo[142648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:56 compute-0 python3.9[142650]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:56 compute-0 sudo[142648]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:57 compute-0 sudo[142803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbiusiqzqqwwhyirhdoafgvlyvnbcvoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525756.8760803-404-121384954135310/AnsiballZ_systemd.py'
Jan 27 14:55:57 compute-0 sudo[142803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:57 compute-0 python3.9[142805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:57 compute-0 sudo[142803]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:58 compute-0 sudo[142958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxobyjlklongpndtgmimglljconazczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525757.6752787-404-278149993100280/AnsiballZ_systemd.py'
Jan 27 14:55:58 compute-0 sudo[142958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:58 compute-0 python3.9[142960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:58 compute-0 sudo[142958]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:58 compute-0 sudo[143113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylymufnqvogpzoezcoqsangqlzfvqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525758.5730577-404-30738870417948/AnsiballZ_systemd.py'
Jan 27 14:55:58 compute-0 sudo[143113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:55:59 compute-0 python3.9[143115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:55:59 compute-0 sudo[143113]: pam_unix(sudo:session): session closed for user root
Jan 27 14:55:59 compute-0 sudo[143268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqwjqjbxkvgikiciizldoxmpxuncrhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525759.6051755-404-222729894624433/AnsiballZ_systemd.py'
Jan 27 14:55:59 compute-0 sudo[143268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:00 compute-0 python3.9[143270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:56:00.204 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:56:00.206 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:56:00.206 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:56:00 compute-0 sudo[143268]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:00 compute-0 sudo[143423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsijslmjtsuquekuupunpqiskkbuniru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525760.3834205-404-140784609727681/AnsiballZ_systemd.py'
Jan 27 14:56:00 compute-0 sudo[143423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:00 compute-0 python3.9[143425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:01 compute-0 sudo[143423]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:01 compute-0 sudo[143578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgsgyniptnpphzqybhuqofysebeewgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525761.187282-404-83863558531131/AnsiballZ_systemd.py'
Jan 27 14:56:01 compute-0 sudo[143578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:01 compute-0 python3.9[143580]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:01 compute-0 podman[143581]: 2026-01-27 14:56:01.792899439 +0000 UTC m=+0.052522541 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 14:56:01 compute-0 sudo[143578]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:02 compute-0 sudo[143752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ystbjtqskbiqayijqoasgskqbvayftbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525761.962696-404-186887670061258/AnsiballZ_systemd.py'
Jan 27 14:56:02 compute-0 sudo[143752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:02 compute-0 python3.9[143754]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:02 compute-0 sudo[143752]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:02 compute-0 sudo[143907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsdhzhfuraorqrvjpoqfwoidarqfqgvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525762.7198615-404-70742944406002/AnsiballZ_systemd.py'
Jan 27 14:56:02 compute-0 sudo[143907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:03 compute-0 python3.9[143909]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:03 compute-0 sudo[143907]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:03 compute-0 sudo[144062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxrklcnarfqqjlawmzrarxtiuoifehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525763.5691168-404-7625535516544/AnsiballZ_systemd.py'
Jan 27 14:56:03 compute-0 sudo[144062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:04 compute-0 python3.9[144064]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:04 compute-0 sudo[144062]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:04 compute-0 sudo[144217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoiwgvextsdthhcdpkknztddyqqglukh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525764.321269-404-87737975575658/AnsiballZ_systemd.py'
Jan 27 14:56:04 compute-0 sudo[144217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:04 compute-0 python3.9[144219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:04 compute-0 sudo[144217]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:05 compute-0 sudo[144372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhhimyxcjrwyhybinykrjzcizgnrpdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525765.0884166-404-186298856159192/AnsiballZ_systemd.py'
Jan 27 14:56:05 compute-0 sudo[144372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:05 compute-0 python3.9[144374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 14:56:05 compute-0 sudo[144372]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:06 compute-0 sudo[144544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmfkfcyrkvrbnuhledjneztsuqkrsdwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525766.1071658-506-51826853348260/AnsiballZ_file.py'
Jan 27 14:56:06 compute-0 sudo[144544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:06 compute-0 podman[144501]: 2026-01-27 14:56:06.470509161 +0000 UTC m=+0.106433697 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 14:56:06 compute-0 python3.9[144551]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:06 compute-0 sudo[144544]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:07 compute-0 sudo[144705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roumxknemzjghntxhqqhyncpouyxauqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525766.7776568-506-146795723406637/AnsiballZ_file.py'
Jan 27 14:56:07 compute-0 sudo[144705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:07 compute-0 python3.9[144707]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:07 compute-0 sudo[144705]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:07 compute-0 sudo[144857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmutneqngdzojwbvxnahkgsxvukczhdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525767.5453677-506-265037647295097/AnsiballZ_file.py'
Jan 27 14:56:07 compute-0 sudo[144857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:08 compute-0 python3.9[144859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:08 compute-0 sudo[144857]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:08 compute-0 sudo[145009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjxyxcbhyqvkccgdarmwxyxunctalbyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525768.3515491-506-144117384744798/AnsiballZ_file.py'
Jan 27 14:56:08 compute-0 sudo[145009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:08 compute-0 python3.9[145011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:08 compute-0 sudo[145009]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:09 compute-0 sudo[145161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icminftozvcjezmxjpnwddifydjcuvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525769.0362842-506-69871788215900/AnsiballZ_file.py'
Jan 27 14:56:09 compute-0 sudo[145161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:09 compute-0 python3.9[145163]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:09 compute-0 sudo[145161]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:10 compute-0 sudo[145313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokyxirvvydbntxxicnmngwqjzwxakzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525769.7335947-506-111813425020719/AnsiballZ_file.py'
Jan 27 14:56:10 compute-0 sudo[145313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:10 compute-0 python3.9[145315]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:56:10 compute-0 sudo[145313]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:11 compute-0 python3.9[145465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:56:11 compute-0 sudo[145615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllbzxzovypzdsdrdhutvcqmyulrbgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525771.2999566-557-211665022343746/AnsiballZ_stat.py'
Jan 27 14:56:11 compute-0 sudo[145615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:11 compute-0 python3.9[145617]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:12 compute-0 sudo[145615]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:12 compute-0 sudo[145741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxktipxidxeebxyqqwfhruupmpxxsqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525771.2999566-557-211665022343746/AnsiballZ_copy.py'
Jan 27 14:56:12 compute-0 sudo[145741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:12 compute-0 python3.9[145743]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525771.2999566-557-211665022343746/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:12 compute-0 sudo[145741]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:13 compute-0 sudo[145893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdijvrofdprhkffyxqfkmycodjisesj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525772.9764178-557-3049931262125/AnsiballZ_stat.py'
Jan 27 14:56:13 compute-0 sudo[145893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:13 compute-0 python3.9[145895]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:13 compute-0 sudo[145893]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:13 compute-0 sudo[146018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hipxvfjeprbtuuumzfigoyvjpdiapyjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525772.9764178-557-3049931262125/AnsiballZ_copy.py'
Jan 27 14:56:13 compute-0 sudo[146018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:14 compute-0 python3.9[146020]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525772.9764178-557-3049931262125/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:14 compute-0 sudo[146018]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:14 compute-0 sudo[146170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgqjcjupajjcljkpevgtmyhcblfgodul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525774.1479464-557-106452814971855/AnsiballZ_stat.py'
Jan 27 14:56:14 compute-0 sudo[146170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:14 compute-0 python3.9[146172]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:14 compute-0 sudo[146170]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:14 compute-0 sudo[146295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umpmrplcuudqpifwajwiiumkmynmincj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525774.1479464-557-106452814971855/AnsiballZ_copy.py'
Jan 27 14:56:14 compute-0 sudo[146295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:15 compute-0 python3.9[146297]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525774.1479464-557-106452814971855/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:15 compute-0 sudo[146295]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:15 compute-0 sudo[146447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opaqhmcfqltljrnwxakkvanmircpxqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525775.478237-557-143226905701285/AnsiballZ_stat.py'
Jan 27 14:56:15 compute-0 sudo[146447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:15 compute-0 python3.9[146449]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:16 compute-0 sudo[146447]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:16 compute-0 sudo[146572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdusexbmllepfgoxmomiyfphctafeonk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525775.478237-557-143226905701285/AnsiballZ_copy.py'
Jan 27 14:56:16 compute-0 sudo[146572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:16 compute-0 python3.9[146574]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525775.478237-557-143226905701285/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:16 compute-0 sudo[146572]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:17 compute-0 sudo[146724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcdhechhnjehxbmofsevugjkphnlninc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525776.8542693-557-200179690069795/AnsiballZ_stat.py'
Jan 27 14:56:17 compute-0 sudo[146724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:17 compute-0 python3.9[146726]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:17 compute-0 sudo[146724]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:17 compute-0 sudo[146849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcheiztcrsuxptepvnvlfxksqiobqnjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525776.8542693-557-200179690069795/AnsiballZ_copy.py'
Jan 27 14:56:17 compute-0 sudo[146849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:18 compute-0 python3.9[146851]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525776.8542693-557-200179690069795/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:18 compute-0 sudo[146849]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:18 compute-0 sudo[147001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgqevaoibmuyutpullssaaxdswlyeqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525778.191424-557-95108928462725/AnsiballZ_stat.py'
Jan 27 14:56:18 compute-0 sudo[147001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:18 compute-0 python3.9[147003]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:18 compute-0 sudo[147001]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:19 compute-0 sudo[147126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbcrrkeokrmyvrratmtvlkyrfaraxiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525778.191424-557-95108928462725/AnsiballZ_copy.py'
Jan 27 14:56:19 compute-0 sudo[147126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:19 compute-0 python3.9[147128]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525778.191424-557-95108928462725/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:19 compute-0 sudo[147126]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:19 compute-0 sudo[147278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abortggdwwfbhhrkqqhidcdymmwbbifg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525779.398444-557-227981686273307/AnsiballZ_stat.py'
Jan 27 14:56:19 compute-0 sudo[147278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:19 compute-0 python3.9[147280]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:19 compute-0 sudo[147278]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:20 compute-0 sudo[147401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eluqxnvxcfkzknrydhoicxgkqqyitykk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525779.398444-557-227981686273307/AnsiballZ_copy.py'
Jan 27 14:56:20 compute-0 sudo[147401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:20 compute-0 python3.9[147403]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525779.398444-557-227981686273307/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:20 compute-0 sudo[147401]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:20 compute-0 sudo[147553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqymekxqrdpwmtqczkvslenwaddemmwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525780.6136906-557-10430076718421/AnsiballZ_stat.py'
Jan 27 14:56:20 compute-0 sudo[147553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:21 compute-0 python3.9[147555]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:21 compute-0 sudo[147553]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:21 compute-0 sudo[147678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kankpkttwqnzxdspowwuogvwlslnbvwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525780.6136906-557-10430076718421/AnsiballZ_copy.py'
Jan 27 14:56:21 compute-0 sudo[147678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:21 compute-0 python3.9[147680]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769525780.6136906-557-10430076718421/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:21 compute-0 sudo[147678]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:22 compute-0 sudo[147830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddrmsczwdeqknriygnggshfkiejykpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525781.8269823-670-79545071092151/AnsiballZ_command.py'
Jan 27 14:56:22 compute-0 sudo[147830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:22 compute-0 python3.9[147832]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 27 14:56:22 compute-0 sudo[147830]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:22 compute-0 sudo[147983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmkrogubfiqvijtvbtgbhyotjeraxyes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525782.719555-679-253022434246925/AnsiballZ_file.py'
Jan 27 14:56:22 compute-0 sudo[147983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:23 compute-0 python3.9[147985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:23 compute-0 sudo[147983]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:23 compute-0 sudo[148135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htaufalezjuevjjckocfqxkguwimsbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525783.3158154-679-101084170908805/AnsiballZ_file.py'
Jan 27 14:56:23 compute-0 sudo[148135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:23 compute-0 python3.9[148137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:23 compute-0 sudo[148135]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:24 compute-0 sudo[148287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkprdwkqigxeconkuaezrjszkjqyica ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525783.9128025-679-15130392424070/AnsiballZ_file.py'
Jan 27 14:56:24 compute-0 sudo[148287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:24 compute-0 python3.9[148289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:24 compute-0 sudo[148287]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:24 compute-0 sudo[148439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpkphrujqcgfboclblwtkrfnmfxnsxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525784.481841-679-105864608563280/AnsiballZ_file.py'
Jan 27 14:56:24 compute-0 sudo[148439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:24 compute-0 python3.9[148441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:24 compute-0 sudo[148439]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:25 compute-0 sudo[148591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auppucinhqncuqxjkfrvotmxahimmdyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525785.1039755-679-21084741223986/AnsiballZ_file.py'
Jan 27 14:56:25 compute-0 sudo[148591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:25 compute-0 python3.9[148593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:25 compute-0 sudo[148591]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:26 compute-0 sudo[148743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdwiovnruylxraxvgwujbuvnqrmrjfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525785.784487-679-160306452829225/AnsiballZ_file.py'
Jan 27 14:56:26 compute-0 sudo[148743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:26 compute-0 python3.9[148745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:26 compute-0 sudo[148743]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:26 compute-0 sudo[148895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aupgunwpaijxwgnugtsfnknztwtxfjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525786.3996754-679-176164828883630/AnsiballZ_file.py'
Jan 27 14:56:26 compute-0 sudo[148895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:26 compute-0 python3.9[148897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:26 compute-0 sudo[148895]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:27 compute-0 sudo[149047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqwpesamhhnnqciyqufiuccjbuekoixf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525787.004374-679-218559559558067/AnsiballZ_file.py'
Jan 27 14:56:27 compute-0 sudo[149047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:27 compute-0 python3.9[149049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:27 compute-0 sudo[149047]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:27 compute-0 sudo[149199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlyytwllvlbwiupmfsezomrwvncdpcna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525787.6155329-679-57484952010506/AnsiballZ_file.py'
Jan 27 14:56:27 compute-0 sudo[149199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:28 compute-0 python3.9[149201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:28 compute-0 sudo[149199]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:28 compute-0 sudo[149351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbwgziafziyqajwwmnlrsjkumathily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525788.2306306-679-105399109106807/AnsiballZ_file.py'
Jan 27 14:56:28 compute-0 sudo[149351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:28 compute-0 python3.9[149353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:28 compute-0 sudo[149351]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:29 compute-0 sudo[149503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iowyrfmouhynstvscdtdcmpnecyvurfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525788.877863-679-186007484947721/AnsiballZ_file.py'
Jan 27 14:56:29 compute-0 sudo[149503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:29 compute-0 python3.9[149505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:29 compute-0 sudo[149503]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:29 compute-0 sudo[149655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-malhctanupvgeukbezwxqbsbxqgxtobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525789.4742095-679-123226577841294/AnsiballZ_file.py'
Jan 27 14:56:29 compute-0 sudo[149655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:29 compute-0 python3.9[149657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:29 compute-0 sudo[149655]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:30 compute-0 sudo[149807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmskvsxpszzndjgpxkhpwbmbjkbfeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525790.0837183-679-55252299309828/AnsiballZ_file.py'
Jan 27 14:56:30 compute-0 sudo[149807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:30 compute-0 python3.9[149809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:30 compute-0 sudo[149807]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:30 compute-0 sudo[149959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfstzwforpluuqdtyoqmkyaeaakguxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525790.6505625-679-184290671761201/AnsiballZ_file.py'
Jan 27 14:56:30 compute-0 sudo[149959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:31 compute-0 python3.9[149961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:31 compute-0 sudo[149959]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:31 compute-0 sudo[150111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocebguogdenroeavblqxipljxjwrfwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525791.3310935-778-271907936927648/AnsiballZ_stat.py'
Jan 27 14:56:31 compute-0 sudo[150111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:31 compute-0 python3.9[150113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:31 compute-0 sudo[150111]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:32 compute-0 sudo[150246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etpsgrwmcgwvnaxzwudeasrjakxhdrxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525791.3310935-778-271907936927648/AnsiballZ_copy.py'
Jan 27 14:56:32 compute-0 sudo[150246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:32 compute-0 podman[150208]: 2026-01-27 14:56:32.189412462 +0000 UTC m=+0.048564292 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 14:56:32 compute-0 python3.9[150255]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525791.3310935-778-271907936927648/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:32 compute-0 sudo[150246]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:32 compute-0 sudo[150406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmizptulbhaylbicohjagswjckremvda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525792.508614-778-237258579447070/AnsiballZ_stat.py'
Jan 27 14:56:32 compute-0 sudo[150406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:32 compute-0 python3.9[150408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:32 compute-0 sudo[150406]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:33 compute-0 sudo[150529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwcifaktsthnedtsnjhkweamrefmlshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525792.508614-778-237258579447070/AnsiballZ_copy.py'
Jan 27 14:56:33 compute-0 sudo[150529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:33 compute-0 python3.9[150531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525792.508614-778-237258579447070/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:33 compute-0 sudo[150529]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:33 compute-0 sudo[150681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvvztaetaxgomkvgshvgiyvymtftdns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525793.703704-778-14588806210097/AnsiballZ_stat.py'
Jan 27 14:56:33 compute-0 sudo[150681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:34 compute-0 python3.9[150683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:34 compute-0 sudo[150681]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:34 compute-0 sudo[150804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhjueoasamygayrafzkpvgckmzemhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525793.703704-778-14588806210097/AnsiballZ_copy.py'
Jan 27 14:56:34 compute-0 sudo[150804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:34 compute-0 python3.9[150806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525793.703704-778-14588806210097/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:34 compute-0 sudo[150804]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:35 compute-0 sudo[150956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxzyieuxnztqsijsffazdejniyojnbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525794.9827178-778-11220474925178/AnsiballZ_stat.py'
Jan 27 14:56:35 compute-0 sudo[150956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:35 compute-0 python3.9[150958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:35 compute-0 sudo[150956]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:35 compute-0 sudo[151079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkaknhohvljpkztimtebwsedukyjoqke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525794.9827178-778-11220474925178/AnsiballZ_copy.py'
Jan 27 14:56:35 compute-0 sudo[151079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:36 compute-0 python3.9[151081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525794.9827178-778-11220474925178/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:36 compute-0 sudo[151079]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:36 compute-0 sudo[151249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwsjkclvurkraehzhkanzkodtiyayhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525796.2584836-778-97190601424886/AnsiballZ_stat.py'
Jan 27 14:56:36 compute-0 sudo[151249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:36 compute-0 podman[151205]: 2026-01-27 14:56:36.586470492 +0000 UTC m=+0.074230977 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:56:36 compute-0 python3.9[151256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:36 compute-0 sudo[151249]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:37 compute-0 sudo[151381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suakppcbdcgocxsixleewqclkagjufuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525796.2584836-778-97190601424886/AnsiballZ_copy.py'
Jan 27 14:56:37 compute-0 sudo[151381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:37 compute-0 python3.9[151383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525796.2584836-778-97190601424886/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:37 compute-0 sudo[151381]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:37 compute-0 sudo[151533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byvlvxxcjvhjeqrkfwyzfhovosbdwiwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525797.4407036-778-125858682968125/AnsiballZ_stat.py'
Jan 27 14:56:37 compute-0 sudo[151533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:37 compute-0 python3.9[151535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:37 compute-0 sudo[151533]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:38 compute-0 sudo[151656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtxnwolqgnxcocuvdpfzastmqnmerisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525797.4407036-778-125858682968125/AnsiballZ_copy.py'
Jan 27 14:56:38 compute-0 sudo[151656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:38 compute-0 python3.9[151658]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525797.4407036-778-125858682968125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:38 compute-0 sudo[151656]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:38 compute-0 sudo[151808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujucsjqiirphqhtacpndetqwyfkrdtlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525798.6516008-778-133155877767660/AnsiballZ_stat.py'
Jan 27 14:56:38 compute-0 sudo[151808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:39 compute-0 python3.9[151810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:39 compute-0 sudo[151808]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:39 compute-0 sudo[151931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-barklmfnintgtzgessxxrzkmysdrjuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525798.6516008-778-133155877767660/AnsiballZ_copy.py'
Jan 27 14:56:39 compute-0 sudo[151931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:39 compute-0 python3.9[151933]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525798.6516008-778-133155877767660/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:39 compute-0 sudo[151931]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:40 compute-0 sudo[152083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqtiyufeojcqszirisifchrjqixcdup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525799.9276185-778-269228360199546/AnsiballZ_stat.py'
Jan 27 14:56:40 compute-0 sudo[152083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:40 compute-0 python3.9[152085]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:40 compute-0 sudo[152083]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:40 compute-0 sudo[152206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrpqibvsamsiwkeqtzbnwjmtlvcqowq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525799.9276185-778-269228360199546/AnsiballZ_copy.py'
Jan 27 14:56:40 compute-0 sudo[152206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:41 compute-0 python3.9[152208]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525799.9276185-778-269228360199546/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:41 compute-0 sudo[152206]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:41 compute-0 sudo[152358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqficmdyljroyklueyxnekcrfahufzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525801.1509235-778-57843440049675/AnsiballZ_stat.py'
Jan 27 14:56:41 compute-0 sudo[152358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:41 compute-0 python3.9[152360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:41 compute-0 sudo[152358]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:41 compute-0 sudo[152481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwghejmtczsjwhvihuquynrywfvtnrev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525801.1509235-778-57843440049675/AnsiballZ_copy.py'
Jan 27 14:56:41 compute-0 sudo[152481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:42 compute-0 python3.9[152483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525801.1509235-778-57843440049675/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:42 compute-0 sudo[152481]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:42 compute-0 sudo[152633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzboxmzuaieupngzeqlnrvvsaunktpot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525802.344426-778-90751501287521/AnsiballZ_stat.py'
Jan 27 14:56:42 compute-0 sudo[152633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:42 compute-0 python3.9[152635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:42 compute-0 sudo[152633]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:43 compute-0 sudo[152756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbrixqonpwghqjjipuypagqopxgpyhbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525802.344426-778-90751501287521/AnsiballZ_copy.py'
Jan 27 14:56:43 compute-0 sudo[152756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:43 compute-0 python3.9[152758]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525802.344426-778-90751501287521/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:43 compute-0 sudo[152756]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:43 compute-0 sudo[152908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscbodesddyaubhvpfughjmhumisthag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525803.5747945-778-60159216867229/AnsiballZ_stat.py'
Jan 27 14:56:43 compute-0 sudo[152908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:44 compute-0 python3.9[152910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:44 compute-0 sudo[152908]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:44 compute-0 sudo[153031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytllqiuxpglaozeyammsbzcwtyrcxgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525803.5747945-778-60159216867229/AnsiballZ_copy.py'
Jan 27 14:56:44 compute-0 sudo[153031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:44 compute-0 python3.9[153033]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525803.5747945-778-60159216867229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:44 compute-0 sudo[153031]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:45 compute-0 sudo[153183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbhffukinlqackhtbgxhbmtsuucwuawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525804.8143997-778-224331555652735/AnsiballZ_stat.py'
Jan 27 14:56:45 compute-0 sudo[153183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:45 compute-0 python3.9[153185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:45 compute-0 sudo[153183]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:45 compute-0 sudo[153306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crnvlwuflczlkjqiwbkuirjbqrvqskin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525804.8143997-778-224331555652735/AnsiballZ_copy.py'
Jan 27 14:56:45 compute-0 sudo[153306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:45 compute-0 python3.9[153308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525804.8143997-778-224331555652735/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:45 compute-0 sudo[153306]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:46 compute-0 sudo[153458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbzmkmngcliurlwoblhbgymgsmgbvtni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525806.1312478-778-251920416117968/AnsiballZ_stat.py'
Jan 27 14:56:46 compute-0 sudo[153458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:46 compute-0 python3.9[153460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:46 compute-0 sudo[153458]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:46 compute-0 sudo[153581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjnzienljqukxjvpfgxkewjkkzftfqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525806.1312478-778-251920416117968/AnsiballZ_copy.py'
Jan 27 14:56:46 compute-0 sudo[153581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:47 compute-0 python3.9[153583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525806.1312478-778-251920416117968/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:47 compute-0 sudo[153581]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:47 compute-0 sudo[153733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijilbyichxdlvtnykunbztvbflmzqurz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525807.3456237-778-84932845066291/AnsiballZ_stat.py'
Jan 27 14:56:47 compute-0 sudo[153733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:47 compute-0 python3.9[153735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:56:47 compute-0 sudo[153733]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:48 compute-0 sudo[153856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnmtkznggykudyptojoschtduxpmerpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525807.3456237-778-84932845066291/AnsiballZ_copy.py'
Jan 27 14:56:48 compute-0 sudo[153856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:48 compute-0 python3.9[153858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525807.3456237-778-84932845066291/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:48 compute-0 sudo[153856]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:49 compute-0 python3.9[154008]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:56:49 compute-0 sudo[154161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhoerejadbdotbsvwvwjbloxwtjparjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525809.256303-984-93089153288232/AnsiballZ_seboolean.py'
Jan 27 14:56:49 compute-0 sudo[154161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:49 compute-0 python3.9[154163]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 27 14:56:51 compute-0 sudo[154161]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:51 compute-0 sudo[154317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scritxubtklmznlybrvjuaryoxowzsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525811.435792-992-102832391563966/AnsiballZ_copy.py'
Jan 27 14:56:51 compute-0 dbus-broker-launch[811]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 27 14:56:51 compute-0 sudo[154317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:51 compute-0 python3.9[154319]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:51 compute-0 sudo[154317]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:52 compute-0 sudo[154469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acfuxyjmysigdlwtlxortqcwnyfrceoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525812.0329754-992-272714021936361/AnsiballZ_copy.py'
Jan 27 14:56:52 compute-0 sudo[154469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:52 compute-0 python3.9[154471]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:52 compute-0 sudo[154469]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:52 compute-0 sudo[154621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqsaupxazwhvsrltglpbxihluujfgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525812.629534-992-266482429901122/AnsiballZ_copy.py'
Jan 27 14:56:52 compute-0 sudo[154621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:53 compute-0 python3.9[154623]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:53 compute-0 sudo[154621]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:53 compute-0 sudo[154773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkizxggbncmtnjigrxmlktauhzxsaidj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525813.270686-992-225041591430036/AnsiballZ_copy.py'
Jan 27 14:56:53 compute-0 sudo[154773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:53 compute-0 python3.9[154775]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:53 compute-0 sudo[154773]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:54 compute-0 sudo[154925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpkaguwruueuatgrnewypbqqwtzcctos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525813.9777625-992-237326615137261/AnsiballZ_copy.py'
Jan 27 14:56:54 compute-0 sudo[154925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:54 compute-0 python3.9[154927]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:54 compute-0 sudo[154925]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:54 compute-0 sudo[155077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcserkyhlbsedadauqdklqdzewebtaof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525814.5974514-1028-131802729240206/AnsiballZ_copy.py'
Jan 27 14:56:54 compute-0 sudo[155077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:55 compute-0 python3.9[155079]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:55 compute-0 sudo[155077]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:55 compute-0 sudo[155229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkggjhdypomlqqvwoxvxmxingsxfwhvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525815.20678-1028-269297582578378/AnsiballZ_copy.py'
Jan 27 14:56:55 compute-0 sudo[155229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:55 compute-0 python3.9[155231]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:55 compute-0 sudo[155229]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:56 compute-0 sudo[155381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riklzeuuascnpzjuxknagalremvxlvdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525815.8085306-1028-39821985844597/AnsiballZ_copy.py'
Jan 27 14:56:56 compute-0 sudo[155381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:56 compute-0 python3.9[155383]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:56 compute-0 sudo[155381]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:56 compute-0 sudo[155533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzueepjxuofqrmgluulndhrrchmtfngz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525816.392183-1028-169614206190543/AnsiballZ_copy.py'
Jan 27 14:56:56 compute-0 sudo[155533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:56 compute-0 python3.9[155535]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:56 compute-0 sudo[155533]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:57 compute-0 sudo[155685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yznklbqvzvrbarbusmnliwddfwqzdpmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525817.0209796-1028-268744160901693/AnsiballZ_copy.py'
Jan 27 14:56:57 compute-0 sudo[155685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:57 compute-0 python3.9[155687]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:56:57 compute-0 sudo[155685]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:57 compute-0 sudo[155837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yehyvhisgaaoiagjpygphwvlltvvbqak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525817.6716437-1064-92548283951307/AnsiballZ_systemd.py'
Jan 27 14:56:57 compute-0 sudo[155837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:58 compute-0 python3.9[155839]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:56:58 compute-0 systemd[1]: Reloading.
Jan 27 14:56:58 compute-0 systemd-rc-local-generator[155865]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:56:58 compute-0 systemd-sysv-generator[155869]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:56:58 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 27 14:56:58 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 27 14:56:58 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 27 14:56:58 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 27 14:56:58 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 27 14:56:58 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 27 14:56:58 compute-0 sudo[155837]: pam_unix(sudo:session): session closed for user root
Jan 27 14:56:59 compute-0 sudo[156030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbamtclscchvxpczlnwujrqjxpgxjixa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525818.7905593-1064-142488714262188/AnsiballZ_systemd.py'
Jan 27 14:56:59 compute-0 sudo[156030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:56:59 compute-0 python3.9[156032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:56:59 compute-0 systemd[1]: Reloading.
Jan 27 14:56:59 compute-0 systemd-rc-local-generator[156057]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:56:59 compute-0 systemd-sysv-generator[156061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:56:59 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 27 14:56:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 27 14:56:59 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 27 14:56:59 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 27 14:56:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 27 14:56:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 27 14:56:59 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 14:56:59 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 14:56:59 compute-0 sudo[156030]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:00 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 27 14:57:00 compute-0 sudo[156246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botobfokvpkngitybxpodxbwiiimzcnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525819.870301-1064-174936209712884/AnsiballZ_systemd.py'
Jan 27 14:57:00 compute-0 sudo[156246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:57:00.206 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:57:00.207 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:57:00.208 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:57:00 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 27 14:57:00 compute-0 python3.9[156248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:57:00 compute-0 systemd[1]: Reloading.
Jan 27 14:57:00 compute-0 systemd-rc-local-generator[156273]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:00 compute-0 systemd-sysv-generator[156277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:00 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 27 14:57:00 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 27 14:57:00 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 27 14:57:00 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 27 14:57:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 14:57:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 14:57:00 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 27 14:57:00 compute-0 sudo[156246]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:00 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 27 14:57:01 compute-0 sudo[156465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutyqkgafxjwppoywsihzykbwlflsjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525820.933939-1064-244319793013755/AnsiballZ_systemd.py'
Jan 27 14:57:01 compute-0 sudo[156465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:01 compute-0 python3.9[156467]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:57:01 compute-0 systemd[1]: Reloading.
Jan 27 14:57:01 compute-0 systemd-rc-local-generator[156494]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:01 compute-0 systemd-sysv-generator[156497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:01 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 27 14:57:01 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 27 14:57:01 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 14:57:01 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 27 14:57:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 27 14:57:01 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 27 14:57:01 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 27 14:57:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 27 14:57:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 27 14:57:01 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 27 14:57:01 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 14:57:01 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 14:57:01 compute-0 setroubleshoot[156195]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fd133eef-84c6-496a-89dd-3e7a10f5956c
Jan 27 14:57:01 compute-0 setroubleshoot[156195]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 27 14:57:01 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 14:57:01 compute-0 sudo[156465]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:02 compute-0 podman[156631]: 2026-01-27 14:57:02.323670903 +0000 UTC m=+0.076435412 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 14:57:02 compute-0 sudo[156701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmocqjrnywujjkofisjdhozapjctzhfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525822.0495753-1064-178780194577857/AnsiballZ_systemd.py'
Jan 27 14:57:02 compute-0 sudo[156701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:02 compute-0 python3.9[156703]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:57:02 compute-0 systemd[1]: Reloading.
Jan 27 14:57:02 compute-0 systemd-sysv-generator[156734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:02 compute-0 systemd-rc-local-generator[156730]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:02 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 27 14:57:02 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 27 14:57:02 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 27 14:57:02 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 27 14:57:02 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 27 14:57:02 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 27 14:57:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 27 14:57:02 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 27 14:57:02 compute-0 sudo[156701]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:03 compute-0 sudo[156912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewmnvhywnephuxeowhodctragygorvxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525823.4199784-1101-126105265811485/AnsiballZ_file.py'
Jan 27 14:57:03 compute-0 sudo[156912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:03 compute-0 python3.9[156914]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:03 compute-0 sudo[156912]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:04 compute-0 sudo[157064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyhwroouqxfijtjdiacwocyjoboxagxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525824.0681014-1109-182597109028710/AnsiballZ_find.py'
Jan 27 14:57:04 compute-0 sudo[157064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:04 compute-0 python3.9[157066]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 14:57:04 compute-0 sudo[157064]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:05 compute-0 sudo[157216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccrzqflisyetitpdctrzemrocwispsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525824.938679-1123-110058258402428/AnsiballZ_stat.py'
Jan 27 14:57:05 compute-0 sudo[157216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:05 compute-0 python3.9[157218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:05 compute-0 sudo[157216]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:05 compute-0 sudo[157339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgjujgregmbpvcnamraqkjnaybaumah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525824.938679-1123-110058258402428/AnsiballZ_copy.py'
Jan 27 14:57:05 compute-0 sudo[157339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:05 compute-0 python3.9[157341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525824.938679-1123-110058258402428/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:05 compute-0 sudo[157339]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:06 compute-0 sudo[157491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hufmzrfqvsltpjgtvkpwwkwwhwzyfwwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525826.311913-1139-25961505655947/AnsiballZ_file.py'
Jan 27 14:57:06 compute-0 sudo[157491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:06 compute-0 python3.9[157493]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:06 compute-0 sudo[157491]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:07 compute-0 sudo[157652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azweavfejcpgwbbotsmkquqkmvhxhnmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525826.9517176-1147-18622478610432/AnsiballZ_stat.py'
Jan 27 14:57:07 compute-0 sudo[157652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:07 compute-0 podman[157617]: 2026-01-27 14:57:07.287109894 +0000 UTC m=+0.084722049 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:57:07 compute-0 python3.9[157660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:07 compute-0 sudo[157652]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:07 compute-0 sudo[157746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snfjvvcbmxogecbwgzibwzgtenazgryz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525826.9517176-1147-18622478610432/AnsiballZ_file.py'
Jan 27 14:57:07 compute-0 sudo[157746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:07 compute-0 python3.9[157748]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:07 compute-0 sudo[157746]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:08 compute-0 sudo[157898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cacipgbyxctkqesctvhcfygwbrzrblii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525828.1065054-1159-244895157101748/AnsiballZ_stat.py'
Jan 27 14:57:08 compute-0 sudo[157898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:08 compute-0 python3.9[157900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:08 compute-0 sudo[157898]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:08 compute-0 sudo[157976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peswjxfgimucekarcwryhokmxjvtnehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525828.1065054-1159-244895157101748/AnsiballZ_file.py'
Jan 27 14:57:08 compute-0 sudo[157976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:08 compute-0 python3.9[157978]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gsuyo6jh recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:09 compute-0 sudo[157976]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:09 compute-0 sudo[158128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meisdocaqcjwezjeicoysvmezzyurkey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525829.1853526-1171-71572311596587/AnsiballZ_stat.py'
Jan 27 14:57:09 compute-0 sudo[158128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:09 compute-0 python3.9[158130]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:09 compute-0 sudo[158128]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:09 compute-0 sudo[158206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luoieitemybtapmreqiluyntkasmmgjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525829.1853526-1171-71572311596587/AnsiballZ_file.py'
Jan 27 14:57:09 compute-0 sudo[158206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:10 compute-0 python3.9[158208]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:10 compute-0 sudo[158206]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:10 compute-0 sudo[158358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nourkjcvzfdqmlbrwpklvoivtkqnvyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525830.3279953-1184-229876478327290/AnsiballZ_command.py'
Jan 27 14:57:10 compute-0 sudo[158358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:10 compute-0 python3.9[158360]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:10 compute-0 sudo[158358]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:11 compute-0 sudo[158511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-benszlozombfcmishdjgkntdlixpqdwh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525831.0912054-1192-137316454685687/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 14:57:11 compute-0 sudo[158511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:11 compute-0 python3[158513]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 14:57:11 compute-0 sudo[158511]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:11 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 27 14:57:11 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.060s CPU time.
Jan 27 14:57:11 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 27 14:57:12 compute-0 sudo[158663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimqqntviajbvtahxberdbqrwkxtpzxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525832.023123-1200-81753697129743/AnsiballZ_stat.py'
Jan 27 14:57:12 compute-0 sudo[158663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:12 compute-0 python3.9[158665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:12 compute-0 sudo[158663]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:12 compute-0 sudo[158741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvxjmmvwnimcdyzjruwyubcuniinshtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525832.023123-1200-81753697129743/AnsiballZ_file.py'
Jan 27 14:57:12 compute-0 sudo[158741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:13 compute-0 python3.9[158743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:13 compute-0 sudo[158741]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:13 compute-0 sudo[158893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqdrbsbrlmzryayrgnlpwpydccqjugry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525833.460756-1212-204492910134469/AnsiballZ_stat.py'
Jan 27 14:57:13 compute-0 sudo[158893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:13 compute-0 python3.9[158895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:14 compute-0 sudo[158893]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:14 compute-0 sudo[159018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpvqxtjjypyiigcjiwanmdcsyxdxbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525833.460756-1212-204492910134469/AnsiballZ_copy.py'
Jan 27 14:57:14 compute-0 sudo[159018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:14 compute-0 python3.9[159020]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525833.460756-1212-204492910134469/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:14 compute-0 sudo[159018]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:15 compute-0 sudo[159170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmcqupiwpiywvfcnslurlhplttjaeqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525834.7620437-1227-153265913985824/AnsiballZ_stat.py'
Jan 27 14:57:15 compute-0 sudo[159170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:15 compute-0 python3.9[159172]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:15 compute-0 sudo[159170]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:15 compute-0 sudo[159248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbeodryimrupuqzdffvwroedysqtvnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525834.7620437-1227-153265913985824/AnsiballZ_file.py'
Jan 27 14:57:15 compute-0 sudo[159248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:15 compute-0 python3.9[159250]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:15 compute-0 sudo[159248]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:16 compute-0 sudo[159400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnzsvojjktbzcsobmdpoqfshkvllalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525835.9016733-1239-148622146612102/AnsiballZ_stat.py'
Jan 27 14:57:16 compute-0 sudo[159400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:16 compute-0 python3.9[159402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:16 compute-0 sudo[159400]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:16 compute-0 sudo[159478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbfohxrxnosdznhwuncqkngyhkjxfxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525835.9016733-1239-148622146612102/AnsiballZ_file.py'
Jan 27 14:57:16 compute-0 sudo[159478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:16 compute-0 python3.9[159480]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:16 compute-0 sudo[159478]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:17 compute-0 sudo[159630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykhyiwmdnjooxthedkvenqwwtzbnqsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525836.9323275-1251-164391973982291/AnsiballZ_stat.py'
Jan 27 14:57:17 compute-0 sudo[159630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:17 compute-0 python3.9[159632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:17 compute-0 sudo[159630]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:17 compute-0 sudo[159755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fitmyiobyaqgpmdywynkacawtizwzcjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525836.9323275-1251-164391973982291/AnsiballZ_copy.py'
Jan 27 14:57:17 compute-0 sudo[159755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:17 compute-0 python3.9[159757]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769525836.9323275-1251-164391973982291/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:18 compute-0 sudo[159755]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:18 compute-0 sudo[159907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhunpcxchsagpnmgtjryprutbygzksdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525838.17599-1266-33742112710661/AnsiballZ_file.py'
Jan 27 14:57:18 compute-0 sudo[159907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:18 compute-0 python3.9[159909]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:18 compute-0 sudo[159907]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:19 compute-0 sudo[160059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjgnmakdspnburwyogfgqpnqpjaohmba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525838.8175886-1274-159817453177613/AnsiballZ_command.py'
Jan 27 14:57:19 compute-0 sudo[160059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:19 compute-0 python3.9[160061]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:19 compute-0 sudo[160059]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:19 compute-0 sudo[160214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uguqynoafxsszzwgqymwpoatvwbyhvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525839.487575-1282-54851385142843/AnsiballZ_blockinfile.py'
Jan 27 14:57:19 compute-0 sudo[160214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:20 compute-0 python3.9[160216]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:20 compute-0 sudo[160214]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:20 compute-0 sudo[160366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgskfpzbrgwwwbkaehzmejscbsxlkrpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525840.3983424-1291-105796422076858/AnsiballZ_command.py'
Jan 27 14:57:20 compute-0 sudo[160366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:20 compute-0 python3.9[160368]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:20 compute-0 sudo[160366]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:21 compute-0 sudo[160519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmxawajpcumuykrhewbvjhfdvemwnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525841.2923753-1299-40082570955613/AnsiballZ_stat.py'
Jan 27 14:57:21 compute-0 sudo[160519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:21 compute-0 python3.9[160521]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:57:21 compute-0 sudo[160519]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:22 compute-0 sudo[160673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdruhohrijkrfscfxlngksboghakhuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525842.010308-1307-278871175934246/AnsiballZ_command.py'
Jan 27 14:57:22 compute-0 sudo[160673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:22 compute-0 python3.9[160675]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:22 compute-0 sudo[160673]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:22 compute-0 sudo[160828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qubpvpszwklhefeaxnpysbecgsnsturj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525842.7445962-1315-136876625051497/AnsiballZ_file.py'
Jan 27 14:57:22 compute-0 sudo[160828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:23 compute-0 python3.9[160830]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:23 compute-0 sudo[160828]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:23 compute-0 sudo[160980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykcjomoxkijgmflumilafpkqvfztksw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525843.3376713-1323-6167830574461/AnsiballZ_stat.py'
Jan 27 14:57:23 compute-0 sudo[160980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:23 compute-0 python3.9[160982]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:23 compute-0 sudo[160980]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:24 compute-0 sudo[161103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taozrzuaazclicdmdjxcnblevbrcdoou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525843.3376713-1323-6167830574461/AnsiballZ_copy.py'
Jan 27 14:57:24 compute-0 sudo[161103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:24 compute-0 python3.9[161105]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525843.3376713-1323-6167830574461/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:24 compute-0 sudo[161103]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:24 compute-0 sudo[161255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxrvmkhurmgpflpamxmvxsimrmagbmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525844.5570922-1338-165516505844374/AnsiballZ_stat.py'
Jan 27 14:57:24 compute-0 sudo[161255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:25 compute-0 python3.9[161257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:25 compute-0 sudo[161255]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:25 compute-0 sudo[161378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiycjbvxpvekiikgicplfkysloucbgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525844.5570922-1338-165516505844374/AnsiballZ_copy.py'
Jan 27 14:57:25 compute-0 sudo[161378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:25 compute-0 python3.9[161380]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525844.5570922-1338-165516505844374/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:25 compute-0 sudo[161378]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:26 compute-0 sudo[161530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icnmkyzddxokqgqwakxxemdngaumpkic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525846.0956566-1353-12850355284196/AnsiballZ_stat.py'
Jan 27 14:57:26 compute-0 sudo[161530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:26 compute-0 python3.9[161532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:26 compute-0 sudo[161530]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:27 compute-0 sudo[161653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdhypinjidtlccpmkdhyzrlstpqkddje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525846.0956566-1353-12850355284196/AnsiballZ_copy.py'
Jan 27 14:57:27 compute-0 sudo[161653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:27 compute-0 python3.9[161655]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525846.0956566-1353-12850355284196/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:27 compute-0 sudo[161653]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:27 compute-0 sudo[161805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojfqquajlvjzdmkrpvbjwwnmltrtjsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525847.3895967-1368-120986172707206/AnsiballZ_systemd.py'
Jan 27 14:57:27 compute-0 sudo[161805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:28 compute-0 python3.9[161807]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:57:28 compute-0 systemd[1]: Reloading.
Jan 27 14:57:28 compute-0 systemd-sysv-generator[161839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:28 compute-0 systemd-rc-local-generator[161833]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:28 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 27 14:57:28 compute-0 sudo[161805]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:29 compute-0 sudo[161997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oprnudstlgyhuvfaacfarnzkkirulpzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525848.732485-1376-5588513638206/AnsiballZ_systemd.py'
Jan 27 14:57:29 compute-0 sudo[161997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:29 compute-0 python3.9[161999]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 14:57:29 compute-0 systemd[1]: Reloading.
Jan 27 14:57:29 compute-0 systemd-rc-local-generator[162026]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:29 compute-0 systemd-sysv-generator[162030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:29 compute-0 systemd[1]: Reloading.
Jan 27 14:57:29 compute-0 systemd-rc-local-generator[162063]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:29 compute-0 systemd-sysv-generator[162067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:29 compute-0 sudo[161997]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:30 compute-0 sshd-session[107367]: Connection closed by 192.168.122.30 port 57848
Jan 27 14:57:30 compute-0 sshd-session[107364]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:57:30 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 27 14:57:30 compute-0 systemd[1]: session-23.scope: Consumed 3min 18.670s CPU time.
Jan 27 14:57:30 compute-0 systemd-logind[820]: Session 23 logged out. Waiting for processes to exit.
Jan 27 14:57:30 compute-0 systemd-logind[820]: Removed session 23.
Jan 27 14:57:33 compute-0 podman[162096]: 2026-01-27 14:57:33.297540851 +0000 UTC m=+0.052377093 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 14:57:36 compute-0 sshd-session[162115]: Accepted publickey for zuul from 192.168.122.30 port 48402 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:57:36 compute-0 systemd-logind[820]: New session 24 of user zuul.
Jan 27 14:57:36 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 27 14:57:36 compute-0 sshd-session[162115]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:57:37 compute-0 python3.9[162268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:57:38 compute-0 podman[162349]: 2026-01-27 14:57:38.371041661 +0000 UTC m=+0.124143454 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 14:57:38 compute-0 python3.9[162448]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:57:38 compute-0 network[162465]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:57:38 compute-0 network[162466]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:57:38 compute-0 network[162467]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:57:42 compute-0 sudo[162736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdaroyabmrqvontrbonbgmalakyahrmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525862.057488-42-63450397276837/AnsiballZ_setup.py'
Jan 27 14:57:42 compute-0 sudo[162736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:42 compute-0 python3.9[162738]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 14:57:42 compute-0 sudo[162736]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:43 compute-0 sudo[162820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkkkyswqfqdhgbxkcrvrohoynkotooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525862.057488-42-63450397276837/AnsiballZ_dnf.py'
Jan 27 14:57:43 compute-0 sudo[162820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:43 compute-0 python3.9[162822]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:57:49 compute-0 sudo[162820]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:49 compute-0 sudo[162973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdczhciotfkvtjswyowoordwmyuxqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525869.4542954-54-178487335879543/AnsiballZ_stat.py'
Jan 27 14:57:49 compute-0 sudo[162973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:50 compute-0 python3.9[162975]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:57:50 compute-0 sudo[162973]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:50 compute-0 sudo[163125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbefmkldbyvtlchnteatxpnscblbrdai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525870.3102767-64-12595161056444/AnsiballZ_command.py'
Jan 27 14:57:50 compute-0 sudo[163125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:50 compute-0 python3.9[163127]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:50 compute-0 sudo[163125]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:51 compute-0 sudo[163278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxyfywmoqlhrxghgjtwjinheelnlpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525871.185056-74-154530711184941/AnsiballZ_stat.py'
Jan 27 14:57:51 compute-0 sudo[163278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:51 compute-0 python3.9[163280]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:57:51 compute-0 sudo[163278]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:52 compute-0 sudo[163430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owenbftwwozzzbkohksxjdxiqhzkwwxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525871.7778125-82-112749557797984/AnsiballZ_command.py'
Jan 27 14:57:52 compute-0 sudo[163430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:52 compute-0 python3.9[163432]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:57:52 compute-0 sudo[163430]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:52 compute-0 sudo[163583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sanhvbfrutoawnflibhhfuwtvpnjdpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525872.4457262-90-96885442360632/AnsiballZ_stat.py'
Jan 27 14:57:52 compute-0 sudo[163583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:52 compute-0 python3.9[163585]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:57:53 compute-0 sudo[163583]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:53 compute-0 sudo[163706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfhfrpannfgyutjiwdaczktinbbmkqan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525872.4457262-90-96885442360632/AnsiballZ_copy.py'
Jan 27 14:57:53 compute-0 sudo[163706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:53 compute-0 python3.9[163708]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525872.4457262-90-96885442360632/.source.iscsi _original_basename=.4e5fq56x follow=False checksum=47ab257295fffcf6c98138920ee4af1cd8a1b052 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:53 compute-0 sudo[163706]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:54 compute-0 sudo[163858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdpqktifmndkqzjyckqrakcrzxsiiwma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525873.919686-105-49130229162375/AnsiballZ_file.py'
Jan 27 14:57:54 compute-0 sudo[163858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:54 compute-0 python3.9[163860]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:54 compute-0 sudo[163858]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:55 compute-0 sudo[164010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpqbywqzrnwopuuryobhmdjoakxlulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525874.754222-113-209070268399937/AnsiballZ_lineinfile.py'
Jan 27 14:57:55 compute-0 sudo[164010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:55 compute-0 python3.9[164012]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:57:55 compute-0 sudo[164010]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:56 compute-0 sudo[164162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnevungrrpgbxsurfuukhpnxiqduwzsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525875.6313493-122-207880526920025/AnsiballZ_systemd_service.py'
Jan 27 14:57:56 compute-0 sudo[164162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:56 compute-0 python3.9[164164]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:57:56 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 27 14:57:56 compute-0 sudo[164162]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:57 compute-0 sudo[164318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjxwnnusfoorhvesyvaaknfmdqcnhiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525876.8646562-130-156071720074292/AnsiballZ_systemd_service.py'
Jan 27 14:57:57 compute-0 sudo[164318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:57:57 compute-0 python3.9[164320]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:57:57 compute-0 systemd[1]: Reloading.
Jan 27 14:57:57 compute-0 systemd-sysv-generator[164353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:57:57 compute-0 systemd-rc-local-generator[164348]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:57:57 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 14:57:57 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 14:57:57 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 27 14:57:57 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 14:57:57 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 27 14:57:57 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 27 14:57:57 compute-0 sudo[164318]: pam_unix(sudo:session): session closed for user root
Jan 27 14:57:58 compute-0 python3.9[164520]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:57:58 compute-0 network[164537]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:57:58 compute-0 network[164538]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:57:58 compute-0 network[164539]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:58:00.207 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:58:00.208 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:58:00.208 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:58:02 compute-0 sudo[164808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftychrmqkfbfvvzrohsavijgnqzjmgjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525882.0348961-153-96787527043076/AnsiballZ_dnf.py'
Jan 27 14:58:02 compute-0 sudo[164808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:02 compute-0 python3.9[164810]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:58:04 compute-0 podman[164814]: 2026-01-27 14:58:04.308210495 +0000 UTC m=+0.057354836 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 14:58:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:58:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:58:05 compute-0 systemd[1]: Reloading.
Jan 27 14:58:05 compute-0 systemd-sysv-generator[164878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:58:05 compute-0 systemd-rc-local-generator[164874]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:58:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:58:06 compute-0 sudo[164808]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:06 compute-0 sudo[165142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohixnsvizeehvvmyemjcowznvibcwshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525886.3935893-162-130444139591915/AnsiballZ_file.py'
Jan 27 14:58:06 compute-0 sudo[165142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:58:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:58:06 compute-0 systemd[1]: run-r181065d7f0b4486f9625a7d0cc76e0a8.service: Deactivated successfully.
Jan 27 14:58:06 compute-0 python3.9[165144]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 14:58:06 compute-0 sudo[165142]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:07 compute-0 sudo[165295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kladekfkvhxgiplctftrflxtnlutofeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525887.3230174-170-264695502538984/AnsiballZ_modprobe.py'
Jan 27 14:58:07 compute-0 sudo[165295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:07 compute-0 python3.9[165297]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 27 14:58:08 compute-0 sudo[165295]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:08 compute-0 sudo[165461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auknitostrihvebeomsdqrbiicswwiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525888.1997066-178-103847734864978/AnsiballZ_stat.py'
Jan 27 14:58:08 compute-0 sudo[165461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:08 compute-0 podman[165425]: 2026-01-27 14:58:08.5305292 +0000 UTC m=+0.084903879 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 14:58:08 compute-0 python3.9[165472]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:58:08 compute-0 sudo[165461]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:09 compute-0 sudo[165600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxobmbomseptsxgbvkhdmjwtpmaktuti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525888.1997066-178-103847734864978/AnsiballZ_copy.py'
Jan 27 14:58:09 compute-0 sudo[165600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:09 compute-0 python3.9[165602]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525888.1997066-178-103847734864978/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:09 compute-0 sudo[165600]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:09 compute-0 sudo[165752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raoopuyrwqgxfwgfnhiarlvlvdrheksl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525889.4750426-194-117248461999843/AnsiballZ_lineinfile.py'
Jan 27 14:58:09 compute-0 sudo[165752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:09 compute-0 python3.9[165754]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:09 compute-0 sudo[165752]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:10 compute-0 sudo[165904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzinmnppcllkbhvbirzeiubgggjxgmlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525890.1335196-202-118968966992339/AnsiballZ_systemd.py'
Jan 27 14:58:10 compute-0 sudo[165904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:11 compute-0 python3.9[165906]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:58:11 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 14:58:11 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 14:58:11 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 14:58:11 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 14:58:11 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 14:58:11 compute-0 sudo[165904]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:11 compute-0 sudo[166060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutqhvexoeefrfdbkuotwwjwffiuqykg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525891.5447361-210-140806227748430/AnsiballZ_command.py'
Jan 27 14:58:11 compute-0 sudo[166060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:12 compute-0 python3.9[166062]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:58:12 compute-0 sudo[166060]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:12 compute-0 sudo[166213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yraqbqtotqxvzutxkkkvigcjevkqfumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525892.348483-220-30798422732874/AnsiballZ_stat.py'
Jan 27 14:58:12 compute-0 sudo[166213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:12 compute-0 python3.9[166215]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:58:12 compute-0 sudo[166213]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:13 compute-0 sudo[166365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkwuogndefhvpfxxhhppvfkznqnfkubx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525893.0230994-229-97926841754693/AnsiballZ_stat.py'
Jan 27 14:58:13 compute-0 sudo[166365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:13 compute-0 python3.9[166367]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:58:13 compute-0 sudo[166365]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:13 compute-0 sudo[166488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yilwjejywqfxmcbhmzftxegwobsegqap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525893.0230994-229-97926841754693/AnsiballZ_copy.py'
Jan 27 14:58:13 compute-0 sudo[166488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:14 compute-0 python3.9[166490]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525893.0230994-229-97926841754693/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:14 compute-0 sudo[166488]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:14 compute-0 sudo[166640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txuiwofzwjrunutzahsoijdgkbnaohud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525894.2137556-244-38452365452271/AnsiballZ_command.py'
Jan 27 14:58:14 compute-0 sudo[166640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:14 compute-0 python3.9[166642]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:58:14 compute-0 sudo[166640]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:15 compute-0 sudo[166793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbgiqumhqrugywldybuzdwlvwiddaxka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525894.8654897-252-213135724055684/AnsiballZ_lineinfile.py'
Jan 27 14:58:15 compute-0 sudo[166793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:15 compute-0 python3.9[166795]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:15 compute-0 sudo[166793]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:16 compute-0 sudo[166945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmecvsyhvhzelicoiqzszkuxsuwzqjts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525895.5865417-260-141753794614298/AnsiballZ_replace.py'
Jan 27 14:58:16 compute-0 sudo[166945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:16 compute-0 python3.9[166947]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:16 compute-0 sudo[166945]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:16 compute-0 sudo[167097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaijurhbzuhqsvvcnsuqyisplacfzhde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525896.4276621-268-14146421277032/AnsiballZ_replace.py'
Jan 27 14:58:16 compute-0 sudo[167097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:16 compute-0 python3.9[167099]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:16 compute-0 sudo[167097]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:17 compute-0 sudo[167249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayezpuukgwfggrtvfedrjznvwnsiamus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525897.1346781-277-252233842071536/AnsiballZ_lineinfile.py'
Jan 27 14:58:17 compute-0 sudo[167249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:17 compute-0 python3.9[167251]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:17 compute-0 sudo[167249]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:17 compute-0 sudo[167401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kazwgpeqjgzfjdrxopmpbgwbwkpqpzvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525897.7479248-277-23347192807250/AnsiballZ_lineinfile.py'
Jan 27 14:58:17 compute-0 sudo[167401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:18 compute-0 python3.9[167403]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:18 compute-0 sudo[167401]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:18 compute-0 sudo[167553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gboxzbmdzzjypmdgglkmhrshgonfyeev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525898.3470488-277-87684883235640/AnsiballZ_lineinfile.py'
Jan 27 14:58:18 compute-0 sudo[167553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:18 compute-0 python3.9[167555]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:18 compute-0 sudo[167553]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:19 compute-0 sudo[167705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etebishvtfwixvjtklygirfyynboskbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525898.9474661-277-9615949543562/AnsiballZ_lineinfile.py'
Jan 27 14:58:19 compute-0 sudo[167705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:19 compute-0 python3.9[167707]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:19 compute-0 sudo[167705]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:19 compute-0 sudo[167857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmtrojkahdkzceyvcerwathqjlrfvauh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525899.59058-306-32946831260613/AnsiballZ_stat.py'
Jan 27 14:58:19 compute-0 sudo[167857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:20 compute-0 python3.9[167859]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:58:20 compute-0 sudo[167857]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:20 compute-0 sudo[168011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iygtbxxebwbtzjlxdzxckznanbaxueul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525900.2487864-314-27097778463916/AnsiballZ_command.py'
Jan 27 14:58:20 compute-0 sudo[168011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:20 compute-0 python3.9[168013]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:58:20 compute-0 sudo[168011]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:21 compute-0 sudo[168164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyyxkihuwxccuexkghpjyxxznhwdpnof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525900.926137-323-117808513665283/AnsiballZ_systemd_service.py'
Jan 27 14:58:21 compute-0 sudo[168164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:21 compute-0 python3.9[168166]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:21 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 27 14:58:21 compute-0 sudo[168164]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:21 compute-0 sudo[168320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpuingecewwgyacdgcrclxphulqgwnbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525901.742841-331-25975199852668/AnsiballZ_systemd_service.py'
Jan 27 14:58:21 compute-0 sudo[168320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:22 compute-0 python3.9[168322]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:22 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 27 14:58:22 compute-0 udevadm[168327]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 27 14:58:22 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 27 14:58:22 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 14:58:22 compute-0 multipathd[168331]: --------start up--------
Jan 27 14:58:22 compute-0 multipathd[168331]: read /etc/multipath.conf
Jan 27 14:58:22 compute-0 multipathd[168331]: path checkers start up
Jan 27 14:58:22 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 14:58:22 compute-0 sudo[168320]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:23 compute-0 sudo[168488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbymdjvipxphhwyfmqwjvaxovgeyavcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525903.016652-343-16351035721124/AnsiballZ_file.py'
Jan 27 14:58:23 compute-0 sudo[168488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:23 compute-0 python3.9[168490]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 14:58:23 compute-0 sudo[168488]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:23 compute-0 sudo[168640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqraxxrrtavlnspfmrehkeluhfaycjgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525903.6461184-351-257709190849982/AnsiballZ_modprobe.py'
Jan 27 14:58:23 compute-0 sudo[168640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:24 compute-0 python3.9[168642]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 27 14:58:24 compute-0 kernel: Key type psk registered
Jan 27 14:58:24 compute-0 sudo[168640]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:24 compute-0 sudo[168802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njqzylabtireoaifgmzwenmgztcjmlvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525904.3663-359-26603997386251/AnsiballZ_stat.py'
Jan 27 14:58:24 compute-0 sudo[168802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:24 compute-0 python3.9[168804]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:58:24 compute-0 sudo[168802]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:25 compute-0 sudo[168925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awevwkfxprjnycsojqnupppxmhbxcsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525904.3663-359-26603997386251/AnsiballZ_copy.py'
Jan 27 14:58:25 compute-0 sudo[168925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:25 compute-0 python3.9[168927]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769525904.3663-359-26603997386251/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:25 compute-0 sudo[168925]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:25 compute-0 sudo[169077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofzmrhvxovhpxgedthokzspnntjbhixu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525905.7264342-375-171788133279160/AnsiballZ_lineinfile.py'
Jan 27 14:58:25 compute-0 sudo[169077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:26 compute-0 python3.9[169079]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:26 compute-0 sudo[169077]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:27 compute-0 sudo[169229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkidfruudfvsbdfkczymxdkbthobebbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525907.1705377-383-154785341173032/AnsiballZ_systemd.py'
Jan 27 14:58:27 compute-0 sudo[169229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:27 compute-0 python3.9[169231]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:58:27 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 14:58:27 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 14:58:27 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 14:58:27 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 14:58:27 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 14:58:27 compute-0 sudo[169229]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:28 compute-0 sudo[169385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppogelpmxxidpfncmxipnhvowtmotpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525908.299259-391-258167244222387/AnsiballZ_dnf.py'
Jan 27 14:58:28 compute-0 sudo[169385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:28 compute-0 python3.9[169387]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 14:58:31 compute-0 systemd[1]: Reloading.
Jan 27 14:58:31 compute-0 systemd-rc-local-generator[169421]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:58:31 compute-0 systemd-sysv-generator[169424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:58:31 compute-0 systemd[1]: Reloading.
Jan 27 14:58:31 compute-0 systemd-sysv-generator[169461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:58:31 compute-0 systemd-rc-local-generator[169457]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:58:32 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 14:58:32 compute-0 systemd-logind[820]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 14:58:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 14:58:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 14:58:32 compute-0 systemd[1]: Reloading.
Jan 27 14:58:32 compute-0 systemd-rc-local-generator[169553]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:58:32 compute-0 systemd-sysv-generator[169556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:58:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 14:58:33 compute-0 sudo[169385]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:33 compute-0 sudo[170725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryagkldlmznzqlyiogddbyxsjeqocmnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525913.441464-399-151031789304887/AnsiballZ_systemd_service.py'
Jan 27 14:58:33 compute-0 sudo[170725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:34 compute-0 python3.9[170745]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:58:34 compute-0 iscsid[164360]: iscsid shutting down.
Jan 27 14:58:34 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 27 14:58:34 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 27 14:58:34 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 27 14:58:34 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 14:58:34 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 14:58:34 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 14:58:34 compute-0 sudo[170725]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:34 compute-0 sudo[171016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewvcszkcyazbkdpzcoswsjogbfjqujpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525914.3482947-407-47386850453698/AnsiballZ_systemd_service.py'
Jan 27 14:58:34 compute-0 sudo[171016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:34 compute-0 podman[170982]: 2026-01-27 14:58:34.654558436 +0000 UTC m=+0.069337079 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:58:34 compute-0 python3.9[171023]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 14:58:34 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 27 14:58:34 compute-0 multipathd[168331]: exit (signal)
Jan 27 14:58:34 compute-0 multipathd[168331]: --------shut down-------
Jan 27 14:58:34 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 27 14:58:34 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 27 14:58:35 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 14:58:35 compute-0 multipathd[171037]: --------start up--------
Jan 27 14:58:35 compute-0 multipathd[171037]: read /etc/multipath.conf
Jan 27 14:58:35 compute-0 multipathd[171037]: path checkers start up
Jan 27 14:58:35 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 14:58:35 compute-0 sudo[171016]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 14:58:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 14:58:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.818s CPU time.
Jan 27 14:58:35 compute-0 systemd[1]: run-r024203fb8d6b4ecc9a413cf1cc505292.service: Deactivated successfully.
Jan 27 14:58:35 compute-0 python3.9[171195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 14:58:36 compute-0 sudo[171349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtchbmnnalndgcrsvrsnydmrzxrnmhzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525916.352639-425-101828698290510/AnsiballZ_file.py'
Jan 27 14:58:36 compute-0 sudo[171349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:36 compute-0 python3.9[171351]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:36 compute-0 sudo[171349]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:37 compute-0 sudo[171501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyafsgoznyrtihhlmyuojxtnfvnwlvmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525917.2374506-436-248220511569464/AnsiballZ_systemd_service.py'
Jan 27 14:58:37 compute-0 sudo[171501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:37 compute-0 python3.9[171503]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:58:37 compute-0 systemd[1]: Reloading.
Jan 27 14:58:37 compute-0 systemd-rc-local-generator[171530]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:58:37 compute-0 systemd-sysv-generator[171533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:58:38 compute-0 sudo[171501]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:38 compute-0 podman[171661]: 2026-01-27 14:58:38.780770155 +0000 UTC m=+0.133392466 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 14:58:38 compute-0 python3.9[171703]: ansible-ansible.builtin.service_facts Invoked
Jan 27 14:58:38 compute-0 network[171730]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 14:58:38 compute-0 network[171731]: 'network-scripts' will be removed from distribution in near future.
Jan 27 14:58:38 compute-0 network[171732]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 14:58:43 compute-0 sudo[172002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwcqklztacmqsoultjpwlxksvgcahuea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525923.5732822-455-249978438757235/AnsiballZ_systemd_service.py'
Jan 27 14:58:43 compute-0 sudo[172002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:44 compute-0 python3.9[172004]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:44 compute-0 sudo[172002]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:44 compute-0 sudo[172155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqzllhogqincdwdkmepybyimmpbdmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525924.414643-455-221376684131984/AnsiballZ_systemd_service.py'
Jan 27 14:58:44 compute-0 sudo[172155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:45 compute-0 python3.9[172157]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:45 compute-0 sudo[172155]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:45 compute-0 sudo[172308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwcajyjsnsdwxfldgmrwkjkzxuzzguj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525925.2423196-455-112422418680348/AnsiballZ_systemd_service.py'
Jan 27 14:58:45 compute-0 sudo[172308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:45 compute-0 python3.9[172310]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:45 compute-0 sudo[172308]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:46 compute-0 sudo[172461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyrxkbahqfpqmealgjtedftcggqtqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525926.0456536-455-184410072996063/AnsiballZ_systemd_service.py'
Jan 27 14:58:46 compute-0 sudo[172461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:46 compute-0 python3.9[172463]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:46 compute-0 sudo[172461]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:47 compute-0 sudo[172614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylprlqngbdtmbjaqjwodgodlxazrxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525926.890787-455-145026921126198/AnsiballZ_systemd_service.py'
Jan 27 14:58:47 compute-0 sudo[172614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:47 compute-0 python3.9[172616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:47 compute-0 sudo[172614]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:47 compute-0 sudo[172767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppurbteunqorhhomscujswrxnlfnegyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525927.6959484-455-184586912303377/AnsiballZ_systemd_service.py'
Jan 27 14:58:47 compute-0 sudo[172767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:48 compute-0 python3.9[172769]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:48 compute-0 sudo[172767]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:48 compute-0 sudo[172920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyrnetjtyjcjtdattviqmhcwaglndaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525928.4512255-455-65456156723333/AnsiballZ_systemd_service.py'
Jan 27 14:58:48 compute-0 sudo[172920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:49 compute-0 python3.9[172922]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:49 compute-0 sudo[172920]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:49 compute-0 sudo[173073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngtsnynkzpmmmowlrcngiadnsuapdifv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525929.2255256-455-64129796358491/AnsiballZ_systemd_service.py'
Jan 27 14:58:49 compute-0 sudo[173073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:49 compute-0 python3.9[173075]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 14:58:49 compute-0 sudo[173073]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:50 compute-0 sudo[173226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tudcxgsvldhhgyofenqkatezswahgtmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525930.1814134-514-278945379682597/AnsiballZ_file.py'
Jan 27 14:58:50 compute-0 sudo[173226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:50 compute-0 python3.9[173228]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:50 compute-0 sudo[173226]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:51 compute-0 sudo[173378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksqvwtqozzyoxfkfzjkmufcfijiszhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525930.8274243-514-5449648775285/AnsiballZ_file.py'
Jan 27 14:58:51 compute-0 sudo[173378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:51 compute-0 python3.9[173380]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:51 compute-0 sudo[173378]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:51 compute-0 sudo[173530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccjdybmpqfuhnenvfhxkbspclfijpjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525931.4883502-514-54864347867297/AnsiballZ_file.py'
Jan 27 14:58:51 compute-0 sudo[173530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:51 compute-0 python3.9[173532]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:51 compute-0 sudo[173530]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:52 compute-0 sudo[173682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvzwaktlzsovdnxrbermxfzdprvlrybd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525932.1377344-514-230168038879552/AnsiballZ_file.py'
Jan 27 14:58:52 compute-0 sudo[173682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:52 compute-0 python3.9[173684]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:52 compute-0 sudo[173682]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:53 compute-0 sudo[173834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrlpilrprxuyqlupujkllmnnlevceat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525932.813379-514-32807110881865/AnsiballZ_file.py'
Jan 27 14:58:53 compute-0 sudo[173834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:53 compute-0 python3.9[173836]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:53 compute-0 sudo[173834]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:53 compute-0 sudo[173986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtuccawfebkrdrhqokyqzmlrotbsbnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525933.4458005-514-45204772649531/AnsiballZ_file.py'
Jan 27 14:58:53 compute-0 sudo[173986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:53 compute-0 python3.9[173988]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:53 compute-0 sudo[173986]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:54 compute-0 sudo[174138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpcylgtixbqihjhceptuzamrxsmigmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525934.075568-514-165156798965346/AnsiballZ_file.py'
Jan 27 14:58:54 compute-0 sudo[174138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:54 compute-0 python3.9[174140]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:54 compute-0 sudo[174138]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:54 compute-0 sudo[174290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxjfkxdxwnhhcaaycubtkfclspifxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525934.6294734-514-161406344660405/AnsiballZ_file.py'
Jan 27 14:58:54 compute-0 sudo[174290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:55 compute-0 python3.9[174292]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:55 compute-0 sudo[174290]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:55 compute-0 sudo[174442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeudqwyiamsjghipqlpympeggesytpid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525935.2464244-571-157506580482270/AnsiballZ_file.py'
Jan 27 14:58:55 compute-0 sudo[174442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:55 compute-0 python3.9[174444]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:55 compute-0 sudo[174442]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:56 compute-0 sudo[174594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scgilrljcieefmugodywzzmflahsvgch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525936.1043556-571-196789014686954/AnsiballZ_file.py'
Jan 27 14:58:56 compute-0 sudo[174594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:56 compute-0 python3.9[174596]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:56 compute-0 sudo[174594]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:57 compute-0 sudo[174746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smkvajljjdaldlyalreneowdkdrijafb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525936.7080698-571-1293508564983/AnsiballZ_file.py'
Jan 27 14:58:57 compute-0 sudo[174746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:57 compute-0 python3.9[174748]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:57 compute-0 sudo[174746]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:57 compute-0 sudo[174898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhbtidsqmtkwqcavoukuzhjrrsrxeck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525937.3560128-571-137556358424814/AnsiballZ_file.py'
Jan 27 14:58:57 compute-0 sudo[174898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:57 compute-0 python3.9[174900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:57 compute-0 sudo[174898]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:58 compute-0 sudo[175050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycezkaozlwknksrsqqwkgiohpzzojbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525938.0011444-571-75154535071313/AnsiballZ_file.py'
Jan 27 14:58:58 compute-0 sudo[175050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:58 compute-0 python3.9[175052]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:58 compute-0 sudo[175050]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:59 compute-0 sudo[175202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sveiblmexzyizwqsdnowjdqqvambphpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525938.7324157-571-69311058745518/AnsiballZ_file.py'
Jan 27 14:58:59 compute-0 sudo[175202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:59 compute-0 python3.9[175204]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:59 compute-0 sudo[175202]: pam_unix(sudo:session): session closed for user root
Jan 27 14:58:59 compute-0 sudo[175354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gehgxpwwokzbbmrfxeadcgezqlymodhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525939.3446934-571-127745175042532/AnsiballZ_file.py'
Jan 27 14:58:59 compute-0 sudo[175354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:58:59 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 27 14:58:59 compute-0 python3.9[175356]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:58:59 compute-0 sudo[175354]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:59:00.208 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 14:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:59:00.209 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 14:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 14:59:00.209 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 14:59:00 compute-0 sudo[175507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgfcmqhulvivhktehrxdrhlabpmirpto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525939.9653053-571-10442112922681/AnsiballZ_file.py'
Jan 27 14:59:00 compute-0 sudo[175507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:00 compute-0 python3.9[175509]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:59:00 compute-0 sudo[175507]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:00 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 14:59:00 compute-0 sudo[175660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpjdpscyhmnzjoxydgsdjotybgknhro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525940.666587-629-266631467235263/AnsiballZ_command.py'
Jan 27 14:59:00 compute-0 sudo[175660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:01 compute-0 python3.9[175662]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:01 compute-0 sudo[175660]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:01 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 27 14:59:02 compute-0 python3.9[175814]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 14:59:02 compute-0 sudo[175965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gucyvjceilwiuicbdpdmwjbacilzcxcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525942.2597277-647-43742832933734/AnsiballZ_systemd_service.py'
Jan 27 14:59:02 compute-0 sudo[175965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:02 compute-0 python3.9[175967]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:59:02 compute-0 systemd[1]: Reloading.
Jan 27 14:59:02 compute-0 systemd-rc-local-generator[175993]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 14:59:02 compute-0 systemd-sysv-generator[175998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:59:03 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 14:59:03 compute-0 sudo[175965]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:03 compute-0 sudo[176153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pytxaredgghuzlurpcsocjcdkmezfmqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525943.2993255-655-29206209938880/AnsiballZ_command.py'
Jan 27 14:59:03 compute-0 sudo[176153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:03 compute-0 python3.9[176155]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:03 compute-0 sudo[176153]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:04 compute-0 sudo[176306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfsybnfllnmerrcgcnaqzxocycdzkhle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525944.0605114-655-78731532337284/AnsiballZ_command.py'
Jan 27 14:59:04 compute-0 sudo[176306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:04 compute-0 python3.9[176308]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:04 compute-0 sudo[176306]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:04 compute-0 sudo[176470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjortizowxcpwvsbtzgrfwktupscgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525944.6957808-655-184240202088225/AnsiballZ_command.py'
Jan 27 14:59:04 compute-0 sudo[176470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:05 compute-0 podman[176433]: 2026-01-27 14:59:05.0005227 +0000 UTC m=+0.067766277 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:59:05 compute-0 python3.9[176478]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:05 compute-0 sudo[176470]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:05 compute-0 sudo[176630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdggxlscsanfjepbnedgjzzyopbwblsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525945.3567777-655-107183780074089/AnsiballZ_command.py'
Jan 27 14:59:05 compute-0 sudo[176630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:05 compute-0 python3.9[176632]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:05 compute-0 sudo[176630]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:06 compute-0 sudo[176783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmezawkihrnjnglwnbbjvkvkgkmhlku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525946.055868-655-105240737043328/AnsiballZ_command.py'
Jan 27 14:59:06 compute-0 sudo[176783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:06 compute-0 python3.9[176785]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:07 compute-0 sudo[176783]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:07 compute-0 sudo[176936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqsqjxslxgosnsyssqjcxubedlwmzpvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525947.681439-655-231657524220881/AnsiballZ_command.py'
Jan 27 14:59:07 compute-0 sudo[176936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:08 compute-0 python3.9[176938]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:08 compute-0 sudo[176936]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:08 compute-0 sudo[177089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odwmhegucdrygpkzgmvlhnzfbjenuikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525948.3221536-655-47710984166950/AnsiballZ_command.py'
Jan 27 14:59:08 compute-0 sudo[177089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:08 compute-0 python3.9[177091]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:08 compute-0 sudo[177089]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:08 compute-0 podman[177093]: 2026-01-27 14:59:08.958992968 +0000 UTC m=+0.082111055 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 14:59:09 compute-0 sudo[177266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktvnzjinbtziypvsifmdspnswabvnomb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525949.0071542-655-187477146554026/AnsiballZ_command.py'
Jan 27 14:59:09 compute-0 sudo[177266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:09 compute-0 python3.9[177268]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 14:59:09 compute-0 sudo[177266]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:10 compute-0 sudo[177419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buzrukzxnbncnjgxwsahyfrmltdelico ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525950.3855543-734-29864649068409/AnsiballZ_file.py'
Jan 27 14:59:10 compute-0 sudo[177419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:10 compute-0 python3.9[177421]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:10 compute-0 sudo[177419]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:11 compute-0 sudo[177571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmeqsgmicomwpikfnekzudnevxupekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525951.028359-734-160679501961930/AnsiballZ_file.py'
Jan 27 14:59:11 compute-0 sudo[177571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:11 compute-0 python3.9[177573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:11 compute-0 sudo[177571]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:11 compute-0 sudo[177723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewldzallsnojhkdzihmhrupnfzffryc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525951.695292-734-202446810429577/AnsiballZ_file.py'
Jan 27 14:59:11 compute-0 sudo[177723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:12 compute-0 python3.9[177725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:12 compute-0 sudo[177723]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:12 compute-0 sudo[177875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkubcomcmkwohzayaqtltlmrdtqmbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525952.5028343-756-228842221333833/AnsiballZ_file.py'
Jan 27 14:59:12 compute-0 sudo[177875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:12 compute-0 python3.9[177877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:13 compute-0 sudo[177875]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:13 compute-0 sudo[178027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhyseqbeyfhlrilkkewhcughqmzirvnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525953.1691787-756-114948098651563/AnsiballZ_file.py'
Jan 27 14:59:13 compute-0 sudo[178027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:13 compute-0 python3.9[178029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:13 compute-0 sudo[178027]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:14 compute-0 sudo[178179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgqapvnnksrnletbucuracpvzaxtbnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525953.8142767-756-5553006423268/AnsiballZ_file.py'
Jan 27 14:59:14 compute-0 sudo[178179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:14 compute-0 python3.9[178181]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:14 compute-0 sudo[178179]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:14 compute-0 sudo[178331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intgksasgvfpiddgbrsenwyawqnwwbml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525954.5063565-756-153423974688481/AnsiballZ_file.py'
Jan 27 14:59:14 compute-0 sudo[178331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:15 compute-0 python3.9[178333]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:15 compute-0 sudo[178331]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:15 compute-0 sudo[178483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klsvyttlhofbyxqycuxzgoyqcnmfmpiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525955.1904595-756-247978877874890/AnsiballZ_file.py'
Jan 27 14:59:15 compute-0 sudo[178483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:15 compute-0 python3.9[178485]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:15 compute-0 sudo[178483]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:16 compute-0 sudo[178635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uohzqlgbsqdjdcsumlapvnrgqxpptbbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525955.9296012-756-7330651689470/AnsiballZ_file.py'
Jan 27 14:59:16 compute-0 sudo[178635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:16 compute-0 python3.9[178637]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:16 compute-0 sudo[178635]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:16 compute-0 sudo[178787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfixhnpejglffetlktymwfrbittayxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525956.6909459-756-88603505670802/AnsiballZ_file.py'
Jan 27 14:59:16 compute-0 sudo[178787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:17 compute-0 python3.9[178789]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:17 compute-0 sudo[178787]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:22 compute-0 sudo[178939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghldptfljxtrteqtzwphiftyczfteox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525962.185161-925-189633536326884/AnsiballZ_getent.py'
Jan 27 14:59:22 compute-0 sudo[178939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:22 compute-0 python3.9[178941]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 27 14:59:22 compute-0 sudo[178939]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:23 compute-0 sudo[179092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvlmvolbxszqzcxvutnoesbuwgughyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525963.0754743-933-169654103842036/AnsiballZ_group.py'
Jan 27 14:59:23 compute-0 sudo[179092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:23 compute-0 python3.9[179094]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 14:59:24 compute-0 groupadd[179095]: group added to /etc/group: name=nova, GID=42436
Jan 27 14:59:24 compute-0 groupadd[179095]: group added to /etc/gshadow: name=nova
Jan 27 14:59:24 compute-0 groupadd[179095]: new group: name=nova, GID=42436
Jan 27 14:59:24 compute-0 sudo[179092]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:24 compute-0 sudo[179250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khppljpevwjyyckxlbulatuedzukhaax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525964.4672236-941-125074119232681/AnsiballZ_user.py'
Jan 27 14:59:24 compute-0 sudo[179250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:25 compute-0 python3.9[179252]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 14:59:25 compute-0 useradd[179254]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 27 14:59:26 compute-0 useradd[179254]: add 'nova' to group 'libvirt'
Jan 27 14:59:26 compute-0 useradd[179254]: add 'nova' to shadow group 'libvirt'
Jan 27 14:59:27 compute-0 sudo[179250]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:28 compute-0 sshd-session[179285]: Accepted publickey for zuul from 192.168.122.30 port 52458 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 14:59:28 compute-0 systemd-logind[820]: New session 25 of user zuul.
Jan 27 14:59:28 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 27 14:59:28 compute-0 sshd-session[179285]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 14:59:28 compute-0 sshd-session[179288]: Received disconnect from 192.168.122.30 port 52458:11: disconnected by user
Jan 27 14:59:28 compute-0 sshd-session[179288]: Disconnected from user zuul 192.168.122.30 port 52458
Jan 27 14:59:28 compute-0 sshd-session[179285]: pam_unix(sshd:session): session closed for user zuul
Jan 27 14:59:28 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 27 14:59:28 compute-0 systemd-logind[820]: Session 25 logged out. Waiting for processes to exit.
Jan 27 14:59:28 compute-0 systemd-logind[820]: Removed session 25.
Jan 27 14:59:29 compute-0 python3.9[179438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:30 compute-0 python3.9[179559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525968.9387155-966-255333979516687/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:30 compute-0 python3.9[179709]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:31 compute-0 python3.9[179785]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:31 compute-0 python3.9[179935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:32 compute-0 python3.9[180056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525971.3732638-966-138028885718353/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:33 compute-0 python3.9[180206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:33 compute-0 python3.9[180327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525972.7336664-966-67831852577210/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:34 compute-0 python3.9[180477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:35 compute-0 python3.9[180598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525974.0171773-966-3606895733632/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:35 compute-0 podman[180599]: 2026-01-27 14:59:35.348375905 +0000 UTC m=+0.094682083 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 14:59:35 compute-0 python3.9[180767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:36 compute-0 python3.9[180888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525975.4237745-966-176875408833696/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:37 compute-0 sudo[181038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pieydwdiifikkukjyqvyylzuqjulygsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525976.764472-1049-49412600634773/AnsiballZ_file.py'
Jan 27 14:59:37 compute-0 sudo[181038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:37 compute-0 python3.9[181040]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:59:37 compute-0 sudo[181038]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:37 compute-0 sudo[181190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsglkndmztrjmxsqqvrekzpismrlwdvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525977.4592113-1057-277585845091258/AnsiballZ_copy.py'
Jan 27 14:59:37 compute-0 sudo[181190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:37 compute-0 python3.9[181192]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:59:37 compute-0 sudo[181190]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:38 compute-0 sudo[181342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bozjydpvxkokacqkdzrwpijocysjmxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525978.1676257-1065-31193732550743/AnsiballZ_stat.py'
Jan 27 14:59:38 compute-0 sudo[181342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:38 compute-0 python3.9[181344]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:59:38 compute-0 sudo[181342]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:39 compute-0 sudo[181510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaeiixztezcklsagyborwmrncraraqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525978.9208102-1073-17522492213950/AnsiballZ_stat.py'
Jan 27 14:59:39 compute-0 sudo[181510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:39 compute-0 podman[181468]: 2026-01-27 14:59:39.341492296 +0000 UTC m=+0.118527885 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 14:59:39 compute-0 python3.9[181519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:39 compute-0 sudo[181510]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:39 compute-0 sudo[181643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snskopvtfxmuuxuoghbhylosklpftuqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525978.9208102-1073-17522492213950/AnsiballZ_copy.py'
Jan 27 14:59:39 compute-0 sudo[181643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:40 compute-0 python3.9[181645]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769525978.9208102-1073-17522492213950/.source _original_basename=.231vbecx follow=False checksum=feab8e2909bb3ca7c9f9e585c9cc35aa804bee2f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 27 14:59:40 compute-0 sudo[181643]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:40 compute-0 python3.9[181797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:59:41 compute-0 python3.9[181949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:42 compute-0 python3.9[182070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525981.2053733-1099-32616689599543/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:43 compute-0 python3.9[182220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 14:59:43 compute-0 python3.9[182341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769525982.482178-1114-122963335603276/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 14:59:44 compute-0 sudo[182491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgtegqbkhlheumxxqvskudybmrxprek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525984.141452-1131-109830716377179/AnsiballZ_container_config_data.py'
Jan 27 14:59:44 compute-0 sudo[182491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:44 compute-0 python3.9[182493]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 27 14:59:45 compute-0 sudo[182491]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:45 compute-0 sudo[182643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpytuwlygdlsncydubembywbzidcxayp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525985.3091981-1142-42706711189008/AnsiballZ_container_config_hash.py'
Jan 27 14:59:45 compute-0 sudo[182643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:46 compute-0 python3.9[182645]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 14:59:46 compute-0 sudo[182643]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:47 compute-0 sudo[182795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqkbaxbzzuamgxoisqbpjtahzvosqlc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525986.425069-1152-40546146851099/AnsiballZ_edpm_container_manage.py'
Jan 27 14:59:47 compute-0 sudo[182795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:47 compute-0 python3[182797]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 14:59:47 compute-0 podman[182834]: 2026-01-27 14:59:47.518777938 +0000 UTC m=+0.028483069 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 14:59:48 compute-0 podman[182834]: 2026-01-27 14:59:48.187518081 +0000 UTC m=+0.697223132 container create 82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Jan 27 14:59:48 compute-0 python3[182797]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 27 14:59:48 compute-0 sudo[182795]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:48 compute-0 sudo[183022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexrjihceszsjzquvhiesoyabjhztdrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525988.5011475-1160-187847427741041/AnsiballZ_stat.py'
Jan 27 14:59:48 compute-0 sudo[183022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:49 compute-0 python3.9[183024]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:59:49 compute-0 sudo[183022]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:49 compute-0 sudo[183176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yahwzfqqxfauxpgljufsgnnxnpvjlwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525989.6301246-1172-181865512761778/AnsiballZ_container_config_data.py'
Jan 27 14:59:49 compute-0 sudo[183176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:50 compute-0 python3.9[183178]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 27 14:59:50 compute-0 sudo[183176]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:50 compute-0 sudo[183328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alffduujlboegxztzppzeddyaseoysmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525990.5684016-1183-279113669760394/AnsiballZ_container_config_hash.py'
Jan 27 14:59:50 compute-0 sudo[183328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:51 compute-0 python3.9[183330]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 14:59:51 compute-0 sudo[183328]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:51 compute-0 sudo[183480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgptwlikdveskslcespwfzdtexequgfb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769525991.4586523-1193-3687405157230/AnsiballZ_edpm_container_manage.py'
Jan 27 14:59:51 compute-0 sudo[183480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:52 compute-0 python3[183482]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 14:59:52 compute-0 podman[183519]: 2026-01-27 14:59:52.257632087 +0000 UTC m=+0.026806143 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 14:59:52 compute-0 podman[183519]: 2026-01-27 14:59:52.891490181 +0000 UTC m=+0.660664177 container create 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 14:59:52 compute-0 python3[183482]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 27 14:59:53 compute-0 sudo[183480]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:53 compute-0 sudo[183707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nndihdxamzozkffntuzxxulhfrgjppns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525993.2811348-1201-25898403877358/AnsiballZ_stat.py'
Jan 27 14:59:53 compute-0 sudo[183707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:53 compute-0 python3.9[183709]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 14:59:53 compute-0 sudo[183707]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:54 compute-0 sudo[183861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkgqcxowvkstqsmrqvxfshqwtvmebot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525994.1357443-1210-277965931555871/AnsiballZ_file.py'
Jan 27 14:59:54 compute-0 sudo[183861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:54 compute-0 python3.9[183863]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:59:54 compute-0 sudo[183861]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:55 compute-0 sudo[184012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkwojfiwpwgcdwlgiotkfdylflnvang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525994.745575-1210-16366507880235/AnsiballZ_copy.py'
Jan 27 14:59:55 compute-0 sudo[184012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:55 compute-0 python3.9[184014]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769525994.745575-1210-16366507880235/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 14:59:55 compute-0 sudo[184012]: pam_unix(sudo:session): session closed for user root
Jan 27 14:59:55 compute-0 sudo[184088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvnfgvneqwesbccmatgwgcrhxitlgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525994.745575-1210-16366507880235/AnsiballZ_systemd.py'
Jan 27 14:59:55 compute-0 sudo[184088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 14:59:56 compute-0 python3.9[184090]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 14:59:56 compute-0 systemd[1]: Reloading.
Jan 27 14:59:57 compute-0 systemd-sysv-generator[184125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 14:59:57 compute-0 systemd-rc-local-generator[184122]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:00:00.209 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:00:00.210 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:00:00.211 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:00:01 compute-0 sudo[184088]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:01 compute-0 sudo[184200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmrrurgdbmdwrgezzxlezundsupjtdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769525994.745575-1210-16366507880235/AnsiballZ_systemd.py'
Jan 27 15:00:01 compute-0 sudo[184200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:02 compute-0 python3.9[184202]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:00:02 compute-0 systemd[1]: Reloading.
Jan 27 15:00:02 compute-0 systemd-rc-local-generator[184231]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:00:02 compute-0 systemd-sysv-generator[184234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:00:02 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 15:00:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:02 compute-0 podman[184241]: 2026-01-27 15:00:02.751997488 +0000 UTC m=+0.247784750 container init 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:00:02 compute-0 podman[184241]: 2026-01-27 15:00:02.758912955 +0000 UTC m=+0.254700207 container start 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 15:00:02 compute-0 nova_compute[184256]: + sudo -E kolla_set_configs
Jan 27 15:00:02 compute-0 podman[184241]: nova_compute
Jan 27 15:00:02 compute-0 systemd[1]: Started nova_compute container.
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Validating config file
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying service configuration files
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Deleting /etc/ceph
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Creating directory /etc/ceph
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Writing out command to execute
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:02 compute-0 nova_compute[184256]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 15:00:02 compute-0 nova_compute[184256]: ++ cat /run_command
Jan 27 15:00:02 compute-0 sudo[184200]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:02 compute-0 nova_compute[184256]: + CMD=nova-compute
Jan 27 15:00:02 compute-0 nova_compute[184256]: + ARGS=
Jan 27 15:00:02 compute-0 nova_compute[184256]: + sudo kolla_copy_cacerts
Jan 27 15:00:02 compute-0 nova_compute[184256]: + [[ ! -n '' ]]
Jan 27 15:00:02 compute-0 nova_compute[184256]: + . kolla_extend_start
Jan 27 15:00:02 compute-0 nova_compute[184256]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 15:00:02 compute-0 nova_compute[184256]: Running command: 'nova-compute'
Jan 27 15:00:02 compute-0 nova_compute[184256]: + umask 0022
Jan 27 15:00:02 compute-0 nova_compute[184256]: + exec nova-compute
Jan 27 15:00:03 compute-0 python3.9[184418]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:04 compute-0 python3.9[184568]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:04 compute-0 nova_compute[184256]: 2026-01-27 15:00:04.884 184260 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:04 compute-0 nova_compute[184256]: 2026-01-27 15:00:04.884 184260 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:04 compute-0 nova_compute[184256]: 2026-01-27 15:00:04.885 184260 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:04 compute-0 nova_compute[184256]: 2026-01-27 15:00:04.885 184260 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 15:00:05 compute-0 nova_compute[184256]: 2026-01-27 15:00:05.021 184260 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:00:05 compute-0 nova_compute[184256]: 2026-01-27 15:00:05.048 184260 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:00:05 compute-0 nova_compute[184256]: 2026-01-27 15:00:05.048 184260 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 15:00:05 compute-0 python3.9[184722]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.171 184260 INFO nova.virt.driver [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 15:00:06 compute-0 sudo[184886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhfurulxougtbqqcqvggdgiczjdcdwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526005.5548337-1270-138645963775736/AnsiballZ_podman_container.py'
Jan 27 15:00:06 compute-0 sudo[184886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:06 compute-0 podman[184846]: 2026-01-27 15:00:06.220051048 +0000 UTC m=+0.064014537 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.300 184260 INFO nova.compute.provider_config [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.349 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.350 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.350 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.350 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.350 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.350 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.351 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.352 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.352 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.352 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.352 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.352 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.353 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.354 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.354 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.354 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.354 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.355 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.355 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.355 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.355 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.355 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.356 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.357 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.357 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.357 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.357 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.358 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.358 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.358 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.358 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.359 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.359 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.359 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.359 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.359 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.360 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.360 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.360 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.360 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.361 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.361 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.361 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.361 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.362 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.363 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.363 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.363 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.363 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.364 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.364 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.364 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.364 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.364 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.365 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.365 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.365 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.365 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.365 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.366 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.366 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.366 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.366 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.367 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.367 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.367 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.368 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.368 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.368 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.368 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.368 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.369 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.369 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.369 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.369 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.370 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.370 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.370 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.370 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.370 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.371 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.371 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.371 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.372 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.372 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.372 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.372 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.372 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.373 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.373 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.373 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.373 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.374 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.374 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.374 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.374 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.374 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.375 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.375 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.375 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.375 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.376 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.376 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.376 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.376 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.376 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.377 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.378 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.378 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.378 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.378 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.378 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.379 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.379 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.379 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.379 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.380 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.380 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.380 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.380 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.380 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.381 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.381 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.381 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.381 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.381 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.382 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.382 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.382 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.382 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.383 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.383 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.383 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.383 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.384 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.384 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.384 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.384 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.385 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.385 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.385 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.385 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.385 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.386 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.386 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.386 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.386 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.387 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.387 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.387 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.387 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.388 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.388 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.388 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.388 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.388 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.389 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.389 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.389 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.389 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.389 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.390 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.390 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.390 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.390 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.390 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.391 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.391 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.391 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.391 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.392 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.393 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.394 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.395 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.396 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.397 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.398 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.399 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.400 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.401 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.401 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.401 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.401 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.401 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.402 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.403 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.404 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.405 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.406 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.407 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.408 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.409 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.410 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.411 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.412 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.413 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.414 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.415 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.416 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.416 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.416 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.416 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.416 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.417 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.418 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.419 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.420 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.421 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.422 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.423 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.424 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.425 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.426 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.427 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.428 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.429 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.430 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.431 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.432 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.433 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.434 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.435 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.436 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 WARNING oslo_config.cfg [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 15:00:06 compute-0 nova_compute[184256]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 15:00:06 compute-0 nova_compute[184256]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 15:00:06 compute-0 nova_compute[184256]: and ``live_migration_inbound_addr`` respectively.
Jan 27 15:00:06 compute-0 nova_compute[184256]: ).  Its value may be silently ignored in the future.
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.437 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.438 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.439 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.440 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.441 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.442 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.443 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.444 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.445 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.446 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.447 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.448 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.449 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.450 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.451 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.452 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.453 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.454 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.455 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.455 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.455 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.455 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.456 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.456 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.456 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.456 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.456 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.457 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.457 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.457 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.457 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.458 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.458 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.458 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.458 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.458 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.459 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.459 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.459 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.459 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.459 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.460 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.460 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.460 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.460 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.461 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.461 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.461 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.461 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.462 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.462 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.462 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.462 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.462 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.463 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.463 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.463 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.463 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.464 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.464 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.464 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.464 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.464 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 python3.9[184892]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.465 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.466 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.466 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.466 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.466 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.466 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.467 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.467 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.467 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.467 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.468 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.468 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.468 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.468 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.468 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.469 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.469 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.469 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.469 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.469 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.470 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.471 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.471 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.471 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.471 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.471 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.472 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.472 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.472 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.473 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.474 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.474 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.474 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.474 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.475 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.475 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.475 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.475 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.475 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.476 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.476 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.476 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.476 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.477 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.477 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.477 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.477 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.477 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.478 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.478 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.478 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.478 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.478 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.479 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.479 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.479 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.479 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.479 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.480 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.481 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.481 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.481 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.481 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.481 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.482 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.482 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.482 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.482 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.483 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.483 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.483 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.483 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.484 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.484 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.484 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.484 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.485 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.485 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.485 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.485 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.485 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.486 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.486 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.486 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.486 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.487 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.488 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.488 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.488 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.488 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.488 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.489 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.489 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.489 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.489 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.490 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.490 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.490 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.490 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.490 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.491 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.491 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.491 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.491 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.491 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.492 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.492 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.492 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.492 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.492 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.493 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.493 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.493 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.493 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.493 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.494 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.495 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.496 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.497 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.498 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.499 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.500 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.501 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.502 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.503 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.504 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.505 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.505 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.505 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.505 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.505 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.506 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.507 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.507 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.507 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.507 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.507 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.508 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.508 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.508 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.508 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.508 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.509 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.509 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.509 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.510 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.510 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.510 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.510 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.512 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.512 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.512 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.512 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.512 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.513 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.514 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.514 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.514 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.514 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.515 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.515 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.515 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.515 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.515 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.516 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.516 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.516 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.516 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.516 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.517 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.517 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.517 184260 DEBUG oslo_service.service [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.519 184260 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.544 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.546 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.546 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.547 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 15:00:06 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 15:00:06 compute-0 sudo[184886]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:06 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.625 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd89a970b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.629 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd89a970b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.630 184260 INFO nova.virt.libvirt.driver [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Connection event '1' reason 'None'
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.676 184260 WARNING nova.virt.libvirt.driver [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 15:00:06 compute-0 nova_compute[184256]: 2026-01-27 15:00:06.676 184260 DEBUG nova.virt.libvirt.volume.mount [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 15:00:07 compute-0 sudo[185117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekpvhultfpsnwuwrqfkutbyhvkxpnqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526006.8474162-1278-142833870674476/AnsiballZ_systemd.py'
Jan 27 15:00:07 compute-0 sudo[185117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:07 compute-0 python3.9[185121]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 15:00:07 compute-0 systemd[1]: Stopping nova_compute container...
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.529 184260 INFO nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]: 
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <host>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <uuid>72809274-cad7-4f43-9f08-53d26ac912a7</uuid>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <arch>x86_64</arch>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model>EPYC-Rome-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <vendor>AMD</vendor>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <microcode version='16777317'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <signature family='23' model='49' stepping='0'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='x2apic'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='tsc-deadline'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='osxsave'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='hypervisor'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='tsc_adjust'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='spec-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='stibp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='arch-capabilities'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='cmp_legacy'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='topoext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='virt-ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='lbrv'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='tsc-scale'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='vmcb-clean'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='pause-filter'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='pfthreshold'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='svme-addr-chk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='rdctl-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='mds-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature name='pschange-mc-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <pages unit='KiB' size='4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <pages unit='KiB' size='2048'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <pages unit='KiB' size='1048576'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <power_management>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <suspend_mem/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <suspend_disk/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <suspend_hybrid/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </power_management>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <iommu support='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <migration_features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <live/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <uri_transports>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <uri_transport>tcp</uri_transport>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <uri_transport>rdma</uri_transport>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </uri_transports>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </migration_features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <topology>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <cells num='1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <cell id='0'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <memory unit='KiB'>7864316</memory>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <distances>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <sibling id='0' value='10'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           </distances>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           <cpus num='8'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:           </cpus>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         </cell>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </cells>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </topology>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <cache>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </cache>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <secmodel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model>selinux</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <doi>0</doi>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </secmodel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <secmodel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model>dac</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <doi>0</doi>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </secmodel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </host>
Jan 27 15:00:07 compute-0 nova_compute[184256]: 
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <guest>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <os_type>hvm</os_type>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <arch name='i686'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <wordsize>32</wordsize>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <domain type='qemu'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <domain type='kvm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </arch>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <pae/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <nonpae/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <acpi default='on' toggle='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <apic default='on' toggle='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <cpuselection/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <deviceboot/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <disksnapshot default='on' toggle='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <externalSnapshot/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </guest>
Jan 27 15:00:07 compute-0 nova_compute[184256]: 
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <guest>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <os_type>hvm</os_type>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <arch name='x86_64'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <wordsize>64</wordsize>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <domain type='qemu'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <domain type='kvm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </arch>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <acpi default='on' toggle='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <apic default='on' toggle='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <cpuselection/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <deviceboot/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <disksnapshot default='on' toggle='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <externalSnapshot/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </guest>
Jan 27 15:00:07 compute-0 nova_compute[184256]: 
Jan 27 15:00:07 compute-0 nova_compute[184256]: </capabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]: 
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.537 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.559 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 15:00:07 compute-0 nova_compute[184256]: <domainCapabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <domain>kvm</domain>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <arch>i686</arch>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <vcpu max='4096'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <iothreads supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <os supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <enum name='firmware'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <loader supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>rom</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pflash</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='readonly'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>yes</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>no</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='secure'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>no</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </loader>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </os>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>on</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>off</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='maximumMigratable'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>on</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>off</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <vendor>AMD</vendor>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='succor'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='custom' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ddpd-u'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sha512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ddpd-u'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sha512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbpb'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbpb'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-128'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-256'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-128'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-256'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='KnightsMill'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512er'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512pf'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512er'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512pf'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tbm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tbm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='athlon'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='athlon-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='core2duo'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='core2duo-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='coreduo'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='coreduo-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='n270'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='n270-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='phenom'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='phenom-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <memoryBacking supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <enum name='sourceType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>file</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>anonymous</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>memfd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </memoryBacking>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <devices>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <disk supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='diskDevice'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>disk</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>cdrom</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>floppy</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>lun</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='bus'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>fdc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>scsi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>sata</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-non-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </disk>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <graphics supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vnc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>egl-headless</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dbus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </graphics>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <video supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='modelType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vga</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>cirrus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>none</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>bochs</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>ramfb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </video>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <hostdev supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='mode'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>subsystem</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='startupPolicy'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>default</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>mandatory</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>requisite</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>optional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='subsysType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pci</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>scsi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='capsType'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='pciBackend'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </hostdev>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <rng supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-non-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>random</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>egd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>builtin</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </rng>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <filesystem supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='driverType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>path</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>handle</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtiofs</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </filesystem>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <tpm supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tpm-tis</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tpm-crb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>emulator</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>external</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendVersion'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>2.0</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </tpm>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <redirdev supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='bus'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </redirdev>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <channel supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pty</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>unix</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </channel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <crypto supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>qemu</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>builtin</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </crypto>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <interface supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>default</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>passt</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </interface>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <panic supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>isa</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>hyperv</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </panic>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <console supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>null</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pty</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dev</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>file</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pipe</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>stdio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>udp</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tcp</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>unix</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>qemu-vdagent</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dbus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </console>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </devices>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <gic supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <genid supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <backup supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <async-teardown supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <s390-pv supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <ps2 supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <tdx supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <sev supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <sgx supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <hyperv supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='features'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>relaxed</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vapic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>spinlocks</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vpindex</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>runtime</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>synic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>stimer</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>reset</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vendor_id</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>frequencies</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>reenlightenment</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tlbflush</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>ipi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>avic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>emsr_bitmap</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>xmm_input</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <defaults>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </defaults>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </hyperv>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <launchSecurity supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </features>
Jan 27 15:00:07 compute-0 nova_compute[184256]: </domainCapabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.565 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 15:00:07 compute-0 nova_compute[184256]: <domainCapabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <domain>kvm</domain>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <arch>i686</arch>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <vcpu max='240'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <iothreads supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <os supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <enum name='firmware'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <loader supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>rom</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pflash</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='readonly'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>yes</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>no</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='secure'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>no</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </loader>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </os>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>on</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>off</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='maximumMigratable'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>on</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>off</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <vendor>AMD</vendor>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='succor'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <mode name='custom' supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ddpd-u'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sha512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ddpd-u'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sha512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm3'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sm4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Denverton-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbpb'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amd-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='auto-ibrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='perfmon-v2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbpb'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='stibp-always-on'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='EPYC-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-128'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-256'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-128'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-256'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx10-512'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='prefetchiti'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Haswell-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='KnightsMill'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512er'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512pf'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512er'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512pf'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tbm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fma4'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tbm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xop'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='amx-tile'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-bf16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-fp16'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bitalg'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrc'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fzrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='la57'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='taa-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ifma'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cmpccxadd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fbsdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='fsrs'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ibrs-all'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='intel-psfd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='lam'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mcdt-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pbrsb-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='psdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='serialize'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vaes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='hle'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='rtm'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512bw'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512cd'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512dq'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512f'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='avx512vl'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='invpcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pcid'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='pku'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='mpx'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='core-capability'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='split-lock-detect'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='cldemote'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='erms'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='gfni'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdir64b'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='movdiri'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='xsaves'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='athlon'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='athlon-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='core2duo'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='core2duo-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='coreduo'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='coreduo-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='n270'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='n270-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='ss'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='phenom'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <blockers model='phenom-v1'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnow'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <feature name='3dnowext'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </blockers>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </mode>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </cpu>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <memoryBacking supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <enum name='sourceType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>file</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>anonymous</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <value>memfd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </memoryBacking>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <devices>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <disk supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='diskDevice'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>disk</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>cdrom</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>floppy</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>lun</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='bus'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>ide</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>fdc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>scsi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>sata</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-non-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </disk>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <graphics supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vnc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>egl-headless</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dbus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </graphics>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <video supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='modelType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vga</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>cirrus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>none</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>bochs</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>ramfb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </video>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <hostdev supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='mode'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>subsystem</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='startupPolicy'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>default</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>mandatory</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>requisite</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>optional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='subsysType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pci</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>scsi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='capsType'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='pciBackend'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </hostdev>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <rng supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtio-non-transitional</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>random</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>egd</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>builtin</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </rng>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <filesystem supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='driverType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>path</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>handle</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>virtiofs</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </filesystem>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <tpm supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tpm-tis</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tpm-crb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>emulator</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>external</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendVersion'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>2.0</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </tpm>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <redirdev supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='bus'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>usb</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </redirdev>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <channel supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pty</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>unix</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </channel>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <crypto supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>qemu</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendModel'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>builtin</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </crypto>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <interface supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='backendType'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>default</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>passt</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </interface>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <panic supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='model'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>isa</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>hyperv</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </panic>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <console supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='type'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>null</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vc</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pty</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dev</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>file</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>pipe</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>stdio</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>udp</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tcp</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>unix</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>qemu-vdagent</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>dbus</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </console>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </devices>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   <features>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <gic supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <genid supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <backup supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <async-teardown supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <s390-pv supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <ps2 supported='yes'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <tdx supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <sev supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <sgx supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <hyperv supported='yes'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <enum name='features'>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>relaxed</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vapic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>spinlocks</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vpindex</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>runtime</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>synic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>stimer</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>reset</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>vendor_id</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>frequencies</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>reenlightenment</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>tlbflush</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>ipi</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>avic</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>emsr_bitmap</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <value>xmm_input</value>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </enum>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       <defaults>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:07 compute-0 nova_compute[184256]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:07 compute-0 nova_compute[184256]:       </defaults>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     </hyperv>
Jan 27 15:00:07 compute-0 nova_compute[184256]:     <launchSecurity supported='no'/>
Jan 27 15:00:07 compute-0 nova_compute[184256]:   </features>
Jan 27 15:00:07 compute-0 nova_compute[184256]: </domainCapabilities>
Jan 27 15:00:07 compute-0 nova_compute[184256]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.618 184260 DEBUG nova.virt.libvirt.host [None req-831dd119-ab0e-46a7-b04f-3fb9905063ec - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.619 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.619 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:00:07 compute-0 nova_compute[184256]: 2026-01-27 15:00:07.620 184260 DEBUG oslo_concurrency.lockutils [None req-18a8c828-3f1b-4e9b-987e-cc847f4cfda8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:00:08 compute-0 virtqemud[184937]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 27 15:00:08 compute-0 virtqemud[184937]: hostname: compute-0
Jan 27 15:00:08 compute-0 virtqemud[184937]: End of file while reading data: Input/output error
Jan 27 15:00:08 compute-0 systemd[1]: libpod-90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785.scope: Deactivated successfully.
Jan 27 15:00:08 compute-0 systemd[1]: libpod-90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785.scope: Consumed 3.204s CPU time.
Jan 27 15:00:08 compute-0 podman[185131]: 2026-01-27 15:00:08.140305652 +0000 UTC m=+0.612862499 container died 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785-userdata-shm.mount: Deactivated successfully.
Jan 27 15:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c-merged.mount: Deactivated successfully.
Jan 27 15:00:08 compute-0 podman[185131]: 2026-01-27 15:00:08.291530518 +0000 UTC m=+0.764087345 container cleanup 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 15:00:08 compute-0 podman[185131]: nova_compute
Jan 27 15:00:08 compute-0 podman[185162]: nova_compute
Jan 27 15:00:08 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 27 15:00:08 compute-0 systemd[1]: Stopped nova_compute container.
Jan 27 15:00:08 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 15:00:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa2c89a709acc6464aa058b74ca0857e9f9e0288f15ed98decf797bcd31e82c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:08 compute-0 podman[185175]: 2026-01-27 15:00:08.591254986 +0000 UTC m=+0.207516244 container init 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:00:08 compute-0 podman[185175]: 2026-01-27 15:00:08.59845765 +0000 UTC m=+0.214718888 container start 90f8d770ed567ec3c8e2476dbcda6d7ea4c502b769cddd27df7bb2f6acdbe785 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:00:08 compute-0 nova_compute[185191]: + sudo -E kolla_set_configs
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Validating config file
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying service configuration files
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /etc/ceph
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Creating directory /etc/ceph
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Writing out command to execute
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:08 compute-0 nova_compute[185191]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 15:00:08 compute-0 nova_compute[185191]: ++ cat /run_command
Jan 27 15:00:08 compute-0 nova_compute[185191]: + CMD=nova-compute
Jan 27 15:00:08 compute-0 nova_compute[185191]: + ARGS=
Jan 27 15:00:08 compute-0 nova_compute[185191]: + sudo kolla_copy_cacerts
Jan 27 15:00:08 compute-0 nova_compute[185191]: + [[ ! -n '' ]]
Jan 27 15:00:08 compute-0 nova_compute[185191]: + . kolla_extend_start
Jan 27 15:00:08 compute-0 nova_compute[185191]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 15:00:08 compute-0 nova_compute[185191]: Running command: 'nova-compute'
Jan 27 15:00:08 compute-0 nova_compute[185191]: + umask 0022
Jan 27 15:00:08 compute-0 nova_compute[185191]: + exec nova-compute
Jan 27 15:00:08 compute-0 podman[185175]: nova_compute
Jan 27 15:00:08 compute-0 systemd[1]: Started nova_compute container.
Jan 27 15:00:08 compute-0 sudo[185117]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:09 compute-0 sudo[185352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqeqehhezmjmnfpnsdftsgqaimpjqyyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526008.917444-1287-279645032430820/AnsiballZ_podman_container.py'
Jan 27 15:00:09 compute-0 sudo[185352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:09 compute-0 python3.9[185354]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 15:00:09 compute-0 systemd[1]: Started libpod-conmon-82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688.scope.
Jan 27 15:00:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e603a8fd2c5d5b560e13de99f5340ce745fac04da279b112526d37c88773ef2e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e603a8fd2c5d5b560e13de99f5340ce745fac04da279b112526d37c88773ef2e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e603a8fd2c5d5b560e13de99f5340ce745fac04da279b112526d37c88773ef2e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 27 15:00:09 compute-0 podman[185393]: 2026-01-27 15:00:09.856684771 +0000 UTC m=+0.215838678 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 27 15:00:10 compute-0 podman[185380]: 2026-01-27 15:00:10.05735802 +0000 UTC m=+0.474832529 container init 82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 15:00:10 compute-0 podman[185380]: 2026-01-27 15:00:10.067684638 +0000 UTC m=+0.485159067 container start 82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Applying nova statedir ownership
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 27 15:00:10 compute-0 nova_compute_init[185428]: INFO:nova_statedir:Nova statedir ownership complete
Jan 27 15:00:10 compute-0 systemd[1]: libpod-82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688.scope: Deactivated successfully.
Jan 27 15:00:10 compute-0 conmon[185413]: conmon 82eaec9b2fdaa3dd8f56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688.scope/container/memory.events
Jan 27 15:00:10 compute-0 python3.9[185354]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 27 15:00:10 compute-0 podman[185429]: 2026-01-27 15:00:10.194793414 +0000 UTC m=+0.049560697 container died 82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 15:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688-userdata-shm.mount: Deactivated successfully.
Jan 27 15:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e603a8fd2c5d5b560e13de99f5340ce745fac04da279b112526d37c88773ef2e-merged.mount: Deactivated successfully.
Jan 27 15:00:10 compute-0 podman[185429]: 2026-01-27 15:00:10.661341448 +0000 UTC m=+0.516108691 container cleanup 82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 27 15:00:10 compute-0 systemd[1]: libpod-conmon-82eaec9b2fdaa3dd8f56a80188999ba89ba9c59c9aae67be6a315dd95ae04688.scope: Deactivated successfully.
Jan 27 15:00:10 compute-0 sudo[185352]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.771 185195 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.772 185195 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.772 185195 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.773 185195 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.919 185195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.933 185195 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:00:10 compute-0 nova_compute[185191]: 2026-01-27 15:00:10.934 185195 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 15:00:11 compute-0 sshd-session[162118]: Connection closed by 192.168.122.30 port 48402
Jan 27 15:00:11 compute-0 sshd-session[162115]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:00:11 compute-0 systemd-logind[820]: Session 24 logged out. Waiting for processes to exit.
Jan 27 15:00:11 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 27 15:00:11 compute-0 systemd[1]: session-24.scope: Consumed 1min 37.973s CPU time.
Jan 27 15:00:11 compute-0 systemd-logind[820]: Removed session 24.
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.463 185195 INFO nova.virt.driver [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.568 185195 INFO nova.compute.provider_config [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.598 185195 DEBUG oslo_concurrency.lockutils [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.598 185195 DEBUG oslo_concurrency.lockutils [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.598 185195 DEBUG oslo_concurrency.lockutils [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.599 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.599 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.599 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.599 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.599 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.600 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.600 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.600 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.600 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.600 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.601 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.602 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.602 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.602 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.602 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.602 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.603 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.603 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.603 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.603 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.603 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.604 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.604 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.604 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.604 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.604 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.605 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.605 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.605 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.605 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.605 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.606 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.607 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.607 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.607 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.607 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.607 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.608 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.608 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.608 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.608 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.608 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.609 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.610 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.611 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.611 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.611 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.611 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.611 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.612 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.613 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.613 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.613 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.613 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.613 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.614 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.614 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.614 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.614 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.614 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.615 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.616 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.617 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.617 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.617 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.617 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.617 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.618 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.619 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.619 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.619 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.619 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.619 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.620 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.621 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.622 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.623 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.623 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.623 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.623 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.623 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.624 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.625 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.626 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.626 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.626 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.626 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.626 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.627 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.627 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.627 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.627 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.627 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.628 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.628 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.628 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.628 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.628 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.629 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.630 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.630 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.630 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.630 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.630 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.631 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.631 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.631 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.631 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.631 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.632 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.633 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.633 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.633 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.633 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.633 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.634 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.635 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.635 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.635 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.635 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.635 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.636 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.637 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.637 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.637 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.637 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.637 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.638 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.639 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.639 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.639 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.639 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.639 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.640 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.641 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.641 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.641 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.641 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.642 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.643 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.643 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.643 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.643 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.643 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.644 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.645 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.645 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.645 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.645 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.646 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.647 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.647 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.647 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.647 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.647 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.648 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.649 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.649 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.649 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.649 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.649 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.650 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.651 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.651 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.651 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.651 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.651 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.652 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.653 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.653 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.653 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.653 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.653 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.654 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.654 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.654 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.654 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.654 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.655 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.656 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.656 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.656 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.656 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.656 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.657 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.658 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.658 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.658 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.658 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.658 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.659 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.660 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.660 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.660 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.660 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.660 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.661 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.662 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.662 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.662 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.662 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.662 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.663 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.663 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.663 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.663 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.664 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.664 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.664 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.665 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.665 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.665 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.665 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.666 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.666 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.666 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.666 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.666 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.667 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.668 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.669 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.670 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.670 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.670 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.670 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.670 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.671 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.671 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.671 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.671 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.671 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.672 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.673 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.673 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.673 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.673 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.673 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.674 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.675 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.675 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.675 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.675 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.675 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.676 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.677 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.678 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.678 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.678 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.678 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.678 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.679 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.680 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.680 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.680 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.680 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.680 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.681 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.682 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.682 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.682 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.682 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.682 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.683 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.684 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.684 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.684 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.684 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.684 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.685 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.686 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.686 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.686 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.686 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.686 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.687 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.688 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.688 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.688 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.688 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.688 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.689 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.689 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.689 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.689 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.689 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.690 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.690 185195 WARNING oslo_config.cfg [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 15:00:11 compute-0 nova_compute[185191]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 15:00:11 compute-0 nova_compute[185191]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 15:00:11 compute-0 nova_compute[185191]: and ``live_migration_inbound_addr`` respectively.
Jan 27 15:00:11 compute-0 nova_compute[185191]: ).  Its value may be silently ignored in the future.
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.690 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.690 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.690 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.691 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.691 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.691 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.691 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.691 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.692 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.692 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.692 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.692 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.692 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.693 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.694 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.694 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.694 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.694 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.694 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.695 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.695 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.695 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.695 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.695 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.696 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.696 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.696 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.696 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.696 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.697 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.698 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.698 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.698 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.698 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.698 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.699 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.699 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.699 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.699 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.699 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.700 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.701 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.701 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.701 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.701 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.701 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.702 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.703 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.704 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.704 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.704 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.704 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.704 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.705 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.706 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.706 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.706 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.706 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.706 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.707 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.707 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.707 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.707 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.707 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.708 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.709 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.710 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.710 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.710 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.710 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.710 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.711 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.712 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.712 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.712 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.712 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.712 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.713 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.714 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.714 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.714 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.714 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.714 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.715 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.716 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.716 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.716 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.716 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.717 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.718 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.718 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.718 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.718 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.718 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.719 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.720 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.720 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.720 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.720 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.720 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.721 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.722 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.722 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.722 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.722 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.722 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.723 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.723 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.723 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.723 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.723 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.724 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.725 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.725 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.725 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.725 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.725 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.726 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.726 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.726 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.726 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.726 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.727 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.728 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.729 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.729 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.729 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.729 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.729 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.730 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.731 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.731 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.731 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.731 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.731 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.732 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.733 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.734 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.734 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.734 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.734 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.734 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.735 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.736 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.736 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.736 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.736 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.737 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.737 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.737 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.737 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.737 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.738 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.739 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.740 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.740 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.740 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.740 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.741 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.742 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.742 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.742 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.742 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.742 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.743 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.743 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.743 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.743 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.743 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.744 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.744 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.744 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.744 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.744 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.745 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.745 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.745 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.745 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.745 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.746 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.746 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.746 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.746 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.746 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.747 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.748 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.748 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.748 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.748 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.748 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.749 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.750 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.750 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.750 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.750 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.750 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.751 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.751 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.751 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.751 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.751 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.752 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.753 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.753 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.753 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.753 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.753 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.754 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.755 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.755 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.755 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.755 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.755 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.756 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.757 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.758 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.759 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.760 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.761 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.762 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.763 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.764 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.765 185195 DEBUG oslo_service.service [None req-6c14c0f0-50c2-453f-a839-cc8c728c68a4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.766 185195 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.788 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.789 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.789 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.789 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.803 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3b96392d90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.806 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3b96392d90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.807 185195 INFO nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Connection event '1' reason 'None'
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.814 185195 INFO nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]: 
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <host>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <uuid>72809274-cad7-4f43-9f08-53d26ac912a7</uuid>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <arch>x86_64</arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model>EPYC-Rome-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <vendor>AMD</vendor>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <microcode version='16777317'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <signature family='23' model='49' stepping='0'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='x2apic'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='tsc-deadline'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='osxsave'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='hypervisor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='tsc_adjust'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='spec-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='stibp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='arch-capabilities'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='cmp_legacy'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='topoext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='virt-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='lbrv'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='tsc-scale'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='vmcb-clean'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='pause-filter'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='pfthreshold'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='svme-addr-chk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='rdctl-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='mds-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature name='pschange-mc-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <pages unit='KiB' size='4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <pages unit='KiB' size='2048'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <pages unit='KiB' size='1048576'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <power_management>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <suspend_mem/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <suspend_disk/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <suspend_hybrid/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </power_management>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <iommu support='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <migration_features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <live/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <uri_transports>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <uri_transport>tcp</uri_transport>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <uri_transport>rdma</uri_transport>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </uri_transports>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </migration_features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <topology>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <cells num='1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <cell id='0'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <memory unit='KiB'>7864316</memory>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <distances>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <sibling id='0' value='10'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           </distances>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           <cpus num='8'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:           </cpus>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         </cell>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </cells>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </topology>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <cache>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </cache>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <secmodel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model>selinux</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <doi>0</doi>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </secmodel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <secmodel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model>dac</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <doi>0</doi>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </secmodel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </host>
Jan 27 15:00:11 compute-0 nova_compute[185191]: 
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <guest>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <os_type>hvm</os_type>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <arch name='i686'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <wordsize>32</wordsize>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <domain type='qemu'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <domain type='kvm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <pae/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <nonpae/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <acpi default='on' toggle='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <apic default='on' toggle='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <cpuselection/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <deviceboot/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <disksnapshot default='on' toggle='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <externalSnapshot/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </guest>
Jan 27 15:00:11 compute-0 nova_compute[185191]: 
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <guest>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <os_type>hvm</os_type>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <arch name='x86_64'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <wordsize>64</wordsize>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <domain type='qemu'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <domain type='kvm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <acpi default='on' toggle='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <apic default='on' toggle='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <cpuselection/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <deviceboot/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <disksnapshot default='on' toggle='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <externalSnapshot/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </guest>
Jan 27 15:00:11 compute-0 nova_compute[185191]: 
Jan 27 15:00:11 compute-0 nova_compute[185191]: </capabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]: 
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.821 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.825 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 15:00:11 compute-0 nova_compute[185191]: <domainCapabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <domain>kvm</domain>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <arch>i686</arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <vcpu max='240'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <iothreads supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <os supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <enum name='firmware'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <loader supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>rom</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pflash</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='readonly'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>yes</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='secure'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </loader>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </os>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='maximumMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <vendor>AMD</vendor>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='succor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='custom' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='KnightsMill'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='athlon'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='athlon-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='core2duo'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='core2duo-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='coreduo'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='coreduo-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='n270'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='n270-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='phenom'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='phenom-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <memoryBacking supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <enum name='sourceType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>file</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>anonymous</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>memfd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </memoryBacking>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <disk supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='diskDevice'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>disk</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>cdrom</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>floppy</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>lun</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>ide</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>fdc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>sata</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <graphics supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vnc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>egl-headless</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </graphics>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <video supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='modelType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vga</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>cirrus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>none</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>bochs</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>ramfb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </video>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <hostdev supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='mode'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>subsystem</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='startupPolicy'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>mandatory</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>requisite</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>optional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='subsysType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pci</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='capsType'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='pciBackend'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </hostdev>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <rng supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>random</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>egd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <filesystem supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='driverType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>path</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>handle</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtiofs</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </filesystem>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <tpm supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tpm-tis</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tpm-crb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>emulator</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>external</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendVersion'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>2.0</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </tpm>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <redirdev supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </redirdev>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <channel supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </channel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <crypto supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>qemu</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </crypto>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <interface supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>passt</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <panic supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>isa</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>hyperv</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </panic>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <console supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>null</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dev</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>file</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pipe</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>stdio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>udp</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tcp</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>qemu-vdagent</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </console>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <gic supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <genid supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <backup supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <async-teardown supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <s390-pv supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <ps2 supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <tdx supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <sev supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <sgx supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <hyperv supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='features'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>relaxed</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vapic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>spinlocks</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vpindex</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>runtime</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>synic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>stimer</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>reset</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vendor_id</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>frequencies</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>reenlightenment</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tlbflush</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>ipi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>avic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>emsr_bitmap</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>xmm_input</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <defaults>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </defaults>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </hyperv>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <launchSecurity supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </features>
Jan 27 15:00:11 compute-0 nova_compute[185191]: </domainCapabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.835 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 15:00:11 compute-0 nova_compute[185191]: <domainCapabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <domain>kvm</domain>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <arch>i686</arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <vcpu max='4096'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <iothreads supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <os supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <enum name='firmware'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <loader supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>rom</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pflash</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='readonly'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>yes</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='secure'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </loader>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </os>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='maximumMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <vendor>AMD</vendor>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='succor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='custom' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Haswell-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='KnightsMill'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='athlon'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='athlon-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='core2duo'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='core2duo-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='coreduo'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='coreduo-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='n270'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='n270-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='phenom'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='phenom-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <memoryBacking supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <enum name='sourceType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>file</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>anonymous</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>memfd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </memoryBacking>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <disk supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='diskDevice'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>disk</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>cdrom</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>floppy</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>lun</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>fdc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>sata</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <graphics supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vnc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>egl-headless</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </graphics>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <video supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='modelType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vga</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>cirrus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>none</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>bochs</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>ramfb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </video>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <hostdev supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='mode'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>subsystem</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='startupPolicy'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>mandatory</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>requisite</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>optional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='subsysType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pci</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='capsType'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='pciBackend'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </hostdev>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <rng supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>random</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>egd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <filesystem supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='driverType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>path</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>handle</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>virtiofs</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </filesystem>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <tpm supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tpm-tis</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tpm-crb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>emulator</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>external</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendVersion'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>2.0</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </tpm>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <redirdev supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </redirdev>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <channel supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </channel>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <crypto supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>qemu</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </crypto>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <interface supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='backendType'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>passt</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <panic supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>isa</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>hyperv</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </panic>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <console supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>null</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vc</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dev</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>file</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pipe</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>stdio</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>udp</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tcp</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>qemu-vdagent</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </console>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <features>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <gic supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <genid supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <backup supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <async-teardown supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <s390-pv supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <ps2 supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <tdx supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <sev supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <sgx supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <hyperv supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='features'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>relaxed</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vapic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>spinlocks</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vpindex</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>runtime</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>synic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>stimer</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>reset</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>vendor_id</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>frequencies</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>reenlightenment</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>tlbflush</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>ipi</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>avic</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>emsr_bitmap</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>xmm_input</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <defaults>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </defaults>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </hyperv>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <launchSecurity supported='no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </features>
Jan 27 15:00:11 compute-0 nova_compute[185191]: </domainCapabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.901 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.903 185195 WARNING nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.903 185195 DEBUG nova.virt.libvirt.volume.mount [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 15:00:11 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.907 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 15:00:11 compute-0 nova_compute[185191]: <domainCapabilities>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <domain>kvm</domain>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <arch>x86_64</arch>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <vcpu max='240'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <iothreads supported='yes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <os supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <enum name='firmware'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <loader supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>rom</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>pflash</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='readonly'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>yes</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='secure'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </loader>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   </os>
Jan 27 15:00:11 compute-0 nova_compute[185191]:   <cpu>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <enum name='maximumMigratable'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <vendor>AMD</vendor>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='succor'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:11 compute-0 nova_compute[185191]:     <mode name='custom' supported='yes'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Denverton-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:11 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:11 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='KnightsMill'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='athlon'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='athlon-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='core2duo'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='core2duo-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='coreduo'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='coreduo-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='n270'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='n270-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='phenom'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='phenom-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <memoryBacking supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <enum name='sourceType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>file</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>anonymous</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>memfd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </memoryBacking>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <disk supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='diskDevice'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>disk</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>cdrom</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>floppy</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>lun</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>ide</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>fdc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>sata</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <graphics supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vnc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>egl-headless</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </graphics>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <video supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='modelType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vga</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>cirrus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>none</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>bochs</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>ramfb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </video>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <hostdev supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='mode'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>subsystem</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='startupPolicy'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>mandatory</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>requisite</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>optional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='subsysType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pci</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='capsType'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='pciBackend'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </hostdev>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <rng supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>random</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>egd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <filesystem supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='driverType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>path</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>handle</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtiofs</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </filesystem>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <tpm supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tpm-tis</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tpm-crb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>emulator</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>external</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendVersion'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>2.0</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </tpm>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <redirdev supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </redirdev>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <channel supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </channel>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <crypto supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>qemu</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </crypto>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <interface supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>passt</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <panic supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>isa</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>hyperv</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </panic>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <console supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>null</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dev</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>file</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pipe</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>stdio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>udp</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tcp</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>qemu-vdagent</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </console>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <features>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <gic supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <genid supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <backup supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <async-teardown supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <s390-pv supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <ps2 supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <tdx supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <sev supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <sgx supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <hyperv supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='features'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>relaxed</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vapic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>spinlocks</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vpindex</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>runtime</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>synic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>stimer</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>reset</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vendor_id</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>frequencies</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>reenlightenment</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tlbflush</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>ipi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>avic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>emsr_bitmap</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>xmm_input</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <defaults>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </defaults>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </hyperv>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <launchSecurity supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </features>
Jan 27 15:00:12 compute-0 nova_compute[185191]: </domainCapabilities>
Jan 27 15:00:12 compute-0 nova_compute[185191]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:11.986 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 15:00:12 compute-0 nova_compute[185191]: <domainCapabilities>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <domain>kvm</domain>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <arch>x86_64</arch>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <vcpu max='4096'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <iothreads supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <os supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <enum name='firmware'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>efi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <loader supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>rom</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pflash</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='readonly'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>yes</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='secure'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>yes</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>no</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </loader>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </os>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <cpu>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <mode name='host-passthrough' supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='hostPassthroughMigratable'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <mode name='maximum' supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='maximumMigratable'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>on</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>off</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <mode name='host-model' supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <vendor>AMD</vendor>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='x2apic'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='hypervisor'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='stibp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='ssbd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='overflow-recov'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='succor'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='lbrv'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='tsc-scale'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='flushbyasid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='pause-filter'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='pfthreshold'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <feature policy='disable' name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <mode name='custom' supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Broadwell-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='ClearwaterForest-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ddpd-u'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sha512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sm3'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sm4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cooperlake'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Cooperlake-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Denverton'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Denverton-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Denverton-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Denverton-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Dhyana-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Milan-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Rome-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-Turin-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amd-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='auto-ibrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vp2intersect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fs-gs-base-ns'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibpb-brtype'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='no-nested-data-bp'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='null-sel-clr-base'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='perfmon-v2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbpb'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='srso-user-kernel-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='stibp-always-on'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='EPYC-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='GraniteRapids-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-128'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-256'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx10-512'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='prefetchiti'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Haswell-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v6'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Icelake-Server-v7'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='IvyBridge-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='KnightsMill'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='KnightsMill-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4fmaps'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-4vnniw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512er'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512pf'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G4-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Opteron_G5-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fma4'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tbm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xop'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SapphireRapids-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='amx-tile'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-bf16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-fp16'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512-vpopcntdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bitalg'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vbmi2'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrc'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fzrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='la57'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='taa-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='tsx-ldtrk'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='SierraForest-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ifma'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-ne-convert'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx-vnni-int8'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bhi-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='bus-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cmpccxadd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fbsdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='fsrs'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ibrs-all'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='intel-psfd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ipred-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='lam'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mcdt-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pbrsb-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='psdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rrsba-ctrl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='sbdr-ssdp-no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='serialize'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vaes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='vpclmulqdq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Client-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='hle'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='rtm'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Skylake-Server-v5'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512bw'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512cd'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512dq'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512f'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='avx512vl'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='invpcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pcid'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='pku'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='mpx'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v2'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v3'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='core-capability'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='split-lock-detect'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='Snowridge-v4'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='cldemote'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='erms'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='gfni'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdir64b'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='movdiri'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='xsaves'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='athlon'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='athlon-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='core2duo'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='core2duo-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='coreduo'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='coreduo-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='n270'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='n270-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='ss'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='phenom'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <blockers model='phenom-v1'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnow'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <feature name='3dnowext'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </blockers>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </mode>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <memoryBacking supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <enum name='sourceType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>file</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>anonymous</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <value>memfd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </memoryBacking>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <disk supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='diskDevice'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>disk</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>cdrom</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>floppy</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>lun</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>fdc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>sata</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <graphics supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vnc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>egl-headless</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </graphics>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <video supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='modelType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vga</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>cirrus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>none</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>bochs</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>ramfb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </video>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <hostdev supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='mode'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>subsystem</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='startupPolicy'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>mandatory</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>requisite</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>optional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='subsysType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pci</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>scsi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='capsType'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='pciBackend'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </hostdev>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <rng supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtio-non-transitional</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>random</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>egd</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <filesystem supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='driverType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>path</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>handle</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>virtiofs</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </filesystem>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <tpm supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tpm-tis</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tpm-crb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>emulator</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>external</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendVersion'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>2.0</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </tpm>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <redirdev supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='bus'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>usb</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </redirdev>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <channel supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </channel>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <crypto supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>qemu</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendModel'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>builtin</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </crypto>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <interface supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='backendType'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>default</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>passt</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <panic supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='model'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>isa</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>hyperv</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </panic>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <console supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='type'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>null</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vc</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pty</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dev</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>file</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>pipe</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>stdio</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>udp</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tcp</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>unix</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>qemu-vdagent</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>dbus</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </console>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   <features>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <gic supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <vmcoreinfo supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <genid supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <backingStoreInput supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <backup supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <async-teardown supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <s390-pv supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <ps2 supported='yes'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <tdx supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <sev supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <sgx supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <hyperv supported='yes'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <enum name='features'>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>relaxed</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vapic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>spinlocks</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vpindex</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>runtime</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>synic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>stimer</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>reset</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>vendor_id</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>frequencies</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>reenlightenment</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>tlbflush</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>ipi</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>avic</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>emsr_bitmap</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <value>xmm_input</value>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </enum>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       <defaults>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <spinlocks>4095</spinlocks>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <stimer_direct>on</stimer_direct>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 15:00:12 compute-0 nova_compute[185191]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 15:00:12 compute-0 nova_compute[185191]:       </defaults>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     </hyperv>
Jan 27 15:00:12 compute-0 nova_compute[185191]:     <launchSecurity supported='no'/>
Jan 27 15:00:12 compute-0 nova_compute[185191]:   </features>
Jan 27 15:00:12 compute-0 nova_compute[185191]: </domainCapabilities>
Jan 27 15:00:12 compute-0 nova_compute[185191]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.059 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.060 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.060 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.065 185195 INFO nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Secure Boot support detected
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.067 185195 INFO nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.067 185195 INFO nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.075 185195 DEBUG nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.131 185195 INFO nova.virt.node [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Determined node identity dbf037fd-3291-487b-ae9c-69178dae2ebc from /var/lib/nova/compute_id
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.185 185195 WARNING nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Compute nodes ['dbf037fd-3291-487b-ae9c-69178dae2ebc'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.423 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.546 185195 WARNING nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.546 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.546 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.547 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.547 185195 DEBUG nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:00:12 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 15:00:12 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.832 185195 WARNING nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.834 185195 DEBUG nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6036MB free_disk=72.64686965942383GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.834 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.834 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:00:12 compute-0 nova_compute[185191]: 2026-01-27 15:00:12.932 185195 WARNING nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] No compute node record for compute-0.ctlplane.example.com:dbf037fd-3291-487b-ae9c-69178dae2ebc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host dbf037fd-3291-487b-ae9c-69178dae2ebc could not be found.
Jan 27 15:00:13 compute-0 nova_compute[185191]: 2026-01-27 15:00:13.013 185195 INFO nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: dbf037fd-3291-487b-ae9c-69178dae2ebc
Jan 27 15:00:13 compute-0 nova_compute[185191]: 2026-01-27 15:00:13.244 185195 DEBUG nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:00:13 compute-0 nova_compute[185191]: 2026-01-27 15:00:13.244 185195 DEBUG nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:00:14 compute-0 nova_compute[185191]: 2026-01-27 15:00:14.677 185195 INFO nova.scheduler.client.report [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [req-5b6d8967-434c-4e19-9043-7b144597a1e0] Created resource provider record via placement API for resource provider with UUID dbf037fd-3291-487b-ae9c-69178dae2ebc and name compute-0.ctlplane.example.com.
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.100 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 27 15:00:15 compute-0 nova_compute[185191]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.101 185195 INFO nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] kernel doesn't support AMD SEV
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.101 185195 DEBUG nova.compute.provider_tree [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.102 185195 DEBUG nova.virt.libvirt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.251 185195 DEBUG nova.scheduler.client.report [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Updated inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.252 185195 DEBUG nova.compute.provider_tree [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Updating resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.252 185195 DEBUG nova.compute.provider_tree [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.426 185195 DEBUG nova.compute.provider_tree [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Updating resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.468 185195 DEBUG nova.compute.resource_tracker [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.469 185195 DEBUG oslo_concurrency.lockutils [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.469 185195 DEBUG nova.service [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.737 185195 DEBUG nova.service [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 27 15:00:15 compute-0 nova_compute[185191]: 2026-01-27 15:00:15.738 185195 DEBUG nova.servicegroup.drivers.db [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 27 15:00:17 compute-0 sshd-session[185540]: Accepted publickey for zuul from 192.168.122.30 port 48228 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:00:17 compute-0 systemd-logind[820]: New session 26 of user zuul.
Jan 27 15:00:17 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 27 15:00:17 compute-0 sshd-session[185540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:00:18 compute-0 python3.9[185693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 15:00:19 compute-0 sudo[185847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skgglapylivxidbzesahmfczmkkapkom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526018.6236336-31-268978744094681/AnsiballZ_systemd_service.py'
Jan 27 15:00:19 compute-0 sudo[185847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:19 compute-0 python3.9[185849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:00:19 compute-0 systemd[1]: Reloading.
Jan 27 15:00:19 compute-0 systemd-rc-local-generator[185879]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:00:19 compute-0 systemd-sysv-generator[185884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:00:19 compute-0 sudo[185847]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:20 compute-0 python3.9[186035]: ansible-ansible.builtin.service_facts Invoked
Jan 27 15:00:20 compute-0 network[186052]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 15:00:20 compute-0 network[186053]: 'network-scripts' will be removed from distribution in near future.
Jan 27 15:00:20 compute-0 network[186054]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 15:00:24 compute-0 sudo[186324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaclitqdrvxzchalcrdaovibsvgbzcbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526024.075592-50-86098356308165/AnsiballZ_systemd_service.py'
Jan 27 15:00:24 compute-0 sudo[186324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:24 compute-0 python3.9[186326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:00:24 compute-0 sudo[186324]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:25 compute-0 sudo[186477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjmojxfavfehwhyxpbjoihecxwxdsnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526025.0404892-60-205276540980855/AnsiballZ_file.py'
Jan 27 15:00:25 compute-0 sudo[186477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:25 compute-0 python3.9[186479]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:25 compute-0 sudo[186477]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:25 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 15:00:26 compute-0 sudo[186630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gillwqwleufnrxpmiovjmqyzhoiprmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526025.8615785-68-120690084733897/AnsiballZ_file.py'
Jan 27 15:00:26 compute-0 sudo[186630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:26 compute-0 python3.9[186632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:26 compute-0 sudo[186630]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:27 compute-0 sudo[186783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owtmnekdpzudzywridiwbxhfbmkghmji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526026.5435183-77-216149853840846/AnsiballZ_command.py'
Jan 27 15:00:27 compute-0 sudo[186783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:27 compute-0 python3.9[186785]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:00:27 compute-0 sudo[186783]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:28 compute-0 python3.9[186937]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 15:00:28 compute-0 sudo[187087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iflvceoukuitqfcantmjbbzfsnlxduey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526028.4678624-95-24722636354274/AnsiballZ_systemd_service.py'
Jan 27 15:00:28 compute-0 sudo[187087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:29 compute-0 python3.9[187089]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:00:29 compute-0 systemd[1]: Reloading.
Jan 27 15:00:29 compute-0 systemd-rc-local-generator[187114]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:00:29 compute-0 systemd-sysv-generator[187119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:00:29 compute-0 sudo[187087]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:29 compute-0 sudo[187273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfroeehldthvduvqemflolbtiombtaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526029.6289654-103-274361823975071/AnsiballZ_command.py'
Jan 27 15:00:29 compute-0 sudo[187273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:30 compute-0 python3.9[187275]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:00:30 compute-0 sudo[187273]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:30 compute-0 sudo[187426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smalshtxdsfavhwworwsttqncjdkmhrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526030.410335-112-83758535184391/AnsiballZ_file.py'
Jan 27 15:00:30 compute-0 sudo[187426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:30 compute-0 python3.9[187428]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:30 compute-0 sudo[187426]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:31 compute-0 python3.9[187578]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:32 compute-0 sudo[187730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cseikrxdmrebqmyiuttgdoespymfmnjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526032.1064649-128-232808772778005/AnsiballZ_group.py'
Jan 27 15:00:32 compute-0 sudo[187730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:32 compute-0 python3.9[187732]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 27 15:00:32 compute-0 sudo[187730]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:33 compute-0 sudo[187882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqdbuylwucndmijsryimfraogtudoool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526033.203588-139-140155657461922/AnsiballZ_getent.py'
Jan 27 15:00:33 compute-0 sudo[187882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:33 compute-0 python3.9[187884]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 15:00:33 compute-0 sudo[187882]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:34 compute-0 sudo[188035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdavxmbbwrombkbcamazmgebluruvwnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526034.047108-147-80647233640914/AnsiballZ_group.py'
Jan 27 15:00:34 compute-0 sudo[188035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:34 compute-0 python3.9[188037]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 15:00:34 compute-0 groupadd[188038]: group added to /etc/group: name=ceilometer, GID=42405
Jan 27 15:00:34 compute-0 groupadd[188038]: group added to /etc/gshadow: name=ceilometer
Jan 27 15:00:34 compute-0 groupadd[188038]: new group: name=ceilometer, GID=42405
Jan 27 15:00:34 compute-0 sudo[188035]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:35 compute-0 sudo[188193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frutnzydlfwcjzjllecfybcwfghrzjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526034.8099697-155-243373462819619/AnsiballZ_user.py'
Jan 27 15:00:35 compute-0 sudo[188193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:35 compute-0 python3.9[188195]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 15:00:35 compute-0 useradd[188197]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 15:00:35 compute-0 useradd[188197]: add 'ceilometer' to group 'libvirt'
Jan 27 15:00:35 compute-0 useradd[188197]: add 'ceilometer' to shadow group 'libvirt'
Jan 27 15:00:35 compute-0 sudo[188193]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:36 compute-0 podman[188327]: 2026-01-27 15:00:36.744368128 +0000 UTC m=+0.054125149 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:00:36 compute-0 python3.9[188363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:37 compute-0 python3.9[188491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526036.4448652-181-220591955318702/.source.conf _original_basename=ceilometer.conf follow=False checksum=5c6a9288d15d1b05b1484826ce363ad306e9930c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:38 compute-0 python3.9[188641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:38 compute-0 python3.9[188762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526037.7348325-181-45358870167822/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:39 compute-0 python3.9[188912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:39 compute-0 python3.9[189033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526038.9753182-181-36666410360400/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:40 compute-0 podman[189034]: 2026-01-27 15:00:40.029593421 +0000 UTC m=+0.083448231 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 27 15:00:40 compute-0 python3.9[189209]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:41 compute-0 python3.9[189361]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:41 compute-0 python3.9[189513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:42 compute-0 python3.9[189634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526041.3531263-240-139987465496787/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:42 compute-0 python3.9[189784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:43 compute-0 python3.9[189905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526042.5270228-240-21524297698595/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:44 compute-0 python3.9[190055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:44 compute-0 python3.9[190176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526043.7200649-269-12383330732072/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:45 compute-0 python3.9[190326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:45 compute-0 nova_compute[185191]: 2026-01-27 15:00:45.739 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:00:45 compute-0 nova_compute[185191]: 2026-01-27 15:00:45.768 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:00:45 compute-0 python3.9[190447]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526044.9292192-285-252688388118453/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:46 compute-0 python3.9[190597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:46 compute-0 python3.9[190718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526046.0556226-300-19863454802570/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:47 compute-0 python3.9[190868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:48 compute-0 python3.9[190989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526047.365033-315-175686162985427/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:48 compute-0 sudo[191139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmerxkjlasxeisugpbyrftueiapchrsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526048.5728395-330-201928717075844/AnsiballZ_file.py'
Jan 27 15:00:48 compute-0 sudo[191139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:49 compute-0 python3.9[191141]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:49 compute-0 sudo[191139]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:49 compute-0 sudo[191291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaghtvapwnjzbnkioubbqpluffgbnrhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526049.222311-338-86891596676990/AnsiballZ_file.py'
Jan 27 15:00:49 compute-0 sudo[191291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:49 compute-0 python3.9[191293]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:49 compute-0 sudo[191291]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:50 compute-0 python3.9[191443]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:51 compute-0 python3.9[191595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:51 compute-0 python3.9[191747]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:00:52 compute-0 sudo[191899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspjlbrfmhrnrvkhjcybdqzblvyyjala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526051.930272-370-62245304200830/AnsiballZ_file.py'
Jan 27 15:00:52 compute-0 sudo[191899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:52 compute-0 python3.9[191901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:52 compute-0 sudo[191899]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:52 compute-0 sudo[192051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeqwvbfoaahcrrrmybwqxiitmwntfmeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526052.5997558-378-268386602058238/AnsiballZ_systemd_service.py'
Jan 27 15:00:52 compute-0 sudo[192051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:53 compute-0 python3.9[192053]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:00:53 compute-0 systemd[1]: Reloading.
Jan 27 15:00:53 compute-0 systemd-rc-local-generator[192080]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:00:53 compute-0 systemd-sysv-generator[192084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:00:53 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 27 15:00:53 compute-0 sudo[192051]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:54 compute-0 sudo[192242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njcyxkoahorooslfxwsuzeddowagrsqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/AnsiballZ_stat.py'
Jan 27 15:00:54 compute-0 sudo[192242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:54 compute-0 python3.9[192244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:54 compute-0 sudo[192242]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:54 compute-0 sudo[192365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqustbhsuwejpegnlduwtcxhyuddrxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/AnsiballZ_copy.py'
Jan 27 15:00:54 compute-0 sudo[192365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:54 compute-0 python3.9[192367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:55 compute-0 sudo[192365]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:55 compute-0 sudo[192441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwbontunujopijmcivmxkkoqdudlsebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/AnsiballZ_stat.py'
Jan 27 15:00:55 compute-0 sudo[192441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:55 compute-0 python3.9[192443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:55 compute-0 sudo[192441]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:55 compute-0 sudo[192564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kelmcrzawwtwbagtlsggcpqrsbrsabus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/AnsiballZ_copy.py'
Jan 27 15:00:55 compute-0 sudo[192564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:56 compute-0 python3.9[192566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526053.9122584-387-98874215519028/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:56 compute-0 sudo[192564]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:56 compute-0 sudo[192716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmktkrdzqpdgqusptppdnigpltrrszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526056.609068-419-20104809890643/AnsiballZ_file.py'
Jan 27 15:00:56 compute-0 sudo[192716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:57 compute-0 python3.9[192718]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:57 compute-0 sudo[192716]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:57 compute-0 sudo[192868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxmoytsozqmyvfojrkcbbjmhbsopcbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526057.3608687-427-241231893826366/AnsiballZ_file.py'
Jan 27 15:00:57 compute-0 sudo[192868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:57 compute-0 python3.9[192870]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:00:57 compute-0 sudo[192868]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:58 compute-0 sudo[193020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hysclowzuznkbwjixlrxmtmjkvsdmnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526058.0423543-435-251125799627111/AnsiballZ_stat.py'
Jan 27 15:00:58 compute-0 sudo[193020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:58 compute-0 python3.9[193022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:00:58 compute-0 sudo[193020]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:58 compute-0 sudo[193143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwfgayenrxwpwalwsdnwhokfdgilfgvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526058.0423543-435-251125799627111/AnsiballZ_copy.py'
Jan 27 15:00:58 compute-0 sudo[193143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:00:59 compute-0 python3.9[193145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526058.0423543-435-251125799627111/.source.json _original_basename=.afkifq43 follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:00:59 compute-0 sudo[193143]: pam_unix(sudo:session): session closed for user root
Jan 27 15:00:59 compute-0 python3.9[193295]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:01:00.210 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:01:00.212 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:01:00.212 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:01:01 compute-0 CROND[193644]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 15:01:01 compute-0 run-parts[193647]: (/etc/cron.hourly) starting 0anacron
Jan 27 15:01:01 compute-0 run-parts[193653]: (/etc/cron.hourly) finished 0anacron
Jan 27 15:01:01 compute-0 CROND[193643]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 15:01:01 compute-0 sudo[193727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhmcixutqdknpjmjxnfqqbbzclvhlqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526061.407793-475-259735429722259/AnsiballZ_container_config_data.py'
Jan 27 15:01:01 compute-0 sudo[193727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:02 compute-0 python3.9[193729]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 27 15:01:02 compute-0 sudo[193727]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:02 compute-0 sudo[193879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcncfilvavqshtozvwqtxwcsparjtvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526062.4600267-486-92238487592999/AnsiballZ_container_config_hash.py'
Jan 27 15:01:02 compute-0 sudo[193879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:03 compute-0 python3.9[193881]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:01:03 compute-0 sudo[193879]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:03 compute-0 sudo[194031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgscpbswxfbnmpispvjotqycawsvfoet ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526063.4457593-496-174641208906132/AnsiballZ_edpm_container_manage.py'
Jan 27 15:01:03 compute-0 sudo[194031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:04 compute-0 python3[194033]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:01:04 compute-0 podman[194070]: 2026-01-27 15:01:04.517043588 +0000 UTC m=+0.101240610 container create 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:01:04 compute-0 podman[194070]: 2026-01-27 15:01:04.444125272 +0000 UTC m=+0.028322374 image pull 784fb2adc2a024f7e3dc24a0780ee88d1dda9d64127026d21a9dba69f9a258da quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 15:01:04 compute-0 python3[194033]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 27 15:01:04 compute-0 sudo[194031]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:05 compute-0 sudo[194258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krlmnnroshsrqjifrscxedtubhwptwev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526064.829368-504-163351006694344/AnsiballZ_stat.py'
Jan 27 15:01:05 compute-0 sudo[194258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:05 compute-0 python3.9[194260]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:05 compute-0 sudo[194258]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:05 compute-0 sudo[194412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqfigyhnqklprjrgncwzmfaedyuvdgww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526065.5700562-513-117846806720630/AnsiballZ_file.py'
Jan 27 15:01:05 compute-0 sudo[194412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:06 compute-0 python3.9[194414]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:06 compute-0 sudo[194412]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:06 compute-0 sudo[194488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbknbujzvganegnueuryiqvjssbgikwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526065.5700562-513-117846806720630/AnsiballZ_stat.py'
Jan 27 15:01:06 compute-0 sudo[194488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:06 compute-0 python3.9[194490]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:06 compute-0 sudo[194488]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:07 compute-0 sudo[194648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okjyhytnmacomtehjpiwrugesriumvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526066.5736601-513-74609916663311/AnsiballZ_copy.py'
Jan 27 15:01:07 compute-0 sudo[194648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:07 compute-0 podman[194613]: 2026-01-27 15:01:07.112934001 +0000 UTC m=+0.068074256 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 15:01:07 compute-0 python3.9[194656]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526066.5736601-513-74609916663311/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:07 compute-0 sudo[194648]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:07 compute-0 sudo[194732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxtxhhqmweftgdbdddszclzwnqhrnbig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526066.5736601-513-74609916663311/AnsiballZ_systemd.py'
Jan 27 15:01:07 compute-0 sudo[194732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:08 compute-0 python3.9[194734]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:01:08 compute-0 systemd[1]: Reloading.
Jan 27 15:01:08 compute-0 systemd-rc-local-generator[194761]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:08 compute-0 systemd-sysv-generator[194765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:08 compute-0 sudo[194732]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:08 compute-0 sudo[194844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhiqrhztvsnnctcdwrceqfwykskfohvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526066.5736601-513-74609916663311/AnsiballZ_systemd.py'
Jan 27 15:01:08 compute-0 sudo[194844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:09 compute-0 python3.9[194846]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:01:09 compute-0 systemd[1]: Reloading.
Jan 27 15:01:09 compute-0 systemd-rc-local-generator[194876]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:09 compute-0 systemd-sysv-generator[194879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:09 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 27 15:01:09 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbdb8676ca0005e6b4cb8706114013710407ffcc386c54602f61689fe16fd21/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbdb8676ca0005e6b4cb8706114013710407ffcc386c54602f61689fe16fd21/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbdb8676ca0005e6b4cb8706114013710407ffcc386c54602f61689fe16fd21/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbdb8676ca0005e6b4cb8706114013710407ffcc386c54602f61689fe16fd21/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.
Jan 27 15:01:09 compute-0 podman[194886]: 2026-01-27 15:01:09.539402319 +0000 UTC m=+0.137666912 container init 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + sudo -E kolla_set_configs
Jan 27 15:01:09 compute-0 sudo[194908]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: sudo: unable to send audit message: Operation not permitted
Jan 27 15:01:09 compute-0 sudo[194908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:01:09 compute-0 podman[194886]: 2026-01-27 15:01:09.571172385 +0000 UTC m=+0.169436948 container start 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:01:09 compute-0 podman[194886]: ceilometer_agent_compute
Jan 27 15:01:09 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Validating config file
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Copying service configuration files
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: INFO:__main__:Writing out command to execute
Jan 27 15:01:09 compute-0 sudo[194908]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:09 compute-0 sudo[194844]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: ++ cat /run_command
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + ARGS=
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + sudo kolla_copy_cacerts
Jan 27 15:01:09 compute-0 sudo[194930]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: sudo: unable to send audit message: Operation not permitted
Jan 27 15:01:09 compute-0 sudo[194930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:01:09 compute-0 sudo[194930]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:09 compute-0 podman[194909]: 2026-01-27 15:01:09.650679278 +0000 UTC m=+0.066392670 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + [[ ! -n '' ]]
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + . kolla_extend_start
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + umask 0022
Jan 27 15:01:09 compute-0 ceilometer_agent_compute[194902]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 27 15:01:09 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:01:09 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Failed with result 'exit-code'.
Jan 27 15:01:10 compute-0 podman[195049]: 2026-01-27 15:01:10.330356196 +0000 UTC m=+0.091654252 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:01:10 compute-0 python3.9[195105]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.529 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.529 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.530 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.531 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.532 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.533 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.534 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.535 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.536 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.537 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.538 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.539 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.540 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.541 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.542 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.543 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.544 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.545 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.565 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.566 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.567 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.568 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.569 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.570 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.571 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.572 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.573 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.574 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.575 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.577 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.579 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.581 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.582 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.796 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.806 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.806 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.806 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.948 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.949 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.950 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.951 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.952 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.953 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.954 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.955 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.956 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.957 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.958 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.959 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.960 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.961 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.962 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.963 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.964 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.967 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.977 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.978 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.978 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.978 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.979 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.979 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.979 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.979 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:01:10 compute-0 nova_compute[185191]: 2026-01-27 15:01:10.979 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.980 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.980 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.981 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:01:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.014 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.014 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.014 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.014 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.185 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.186 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5910MB free_disk=72.64590072631836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.186 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.186 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:01:11 compute-0 sudo[195274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egomorbinuepqqhnezenhjupssjgprym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526070.9942758-558-18230869276634/AnsiballZ_stat.py'
Jan 27 15:01:11 compute-0 sudo[195274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.377 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.377 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.408 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:01:11 compute-0 python3.9[195276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.493 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.495 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:01:11 compute-0 nova_compute[185191]: 2026-01-27 15:01:11.495 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:01:11 compute-0 sudo[195274]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:11 compute-0 sudo[195399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzubqfjphlcbgwhriyomahphgufalbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526070.9942758-558-18230869276634/AnsiballZ_copy.py'
Jan 27 15:01:11 compute-0 sudo[195399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:12 compute-0 python3.9[195401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526070.9942758-558-18230869276634/.source.yaml _original_basename=.mzmjjha2 follow=False checksum=f5d0c4265d1879fa155262961140396406c1a091 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:12 compute-0 sudo[195399]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:12 compute-0 sudo[195551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cardcdcqqybpbsstbyvifpgggxpqpqcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526072.3347101-573-195069902626648/AnsiballZ_stat.py'
Jan 27 15:01:12 compute-0 sudo[195551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:12 compute-0 python3.9[195553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:12 compute-0 sudo[195551]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:13 compute-0 sudo[195674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujbvlppidbkrjuqhyhzkoobucvvradh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526072.3347101-573-195069902626648/AnsiballZ_copy.py'
Jan 27 15:01:13 compute-0 sudo[195674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:13 compute-0 python3.9[195676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526072.3347101-573-195069902626648/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:13 compute-0 sudo[195674]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:14 compute-0 sudo[195826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqehxphwhcgoqwkukjhkgpksszynerrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526073.9657352-594-140487749464381/AnsiballZ_file.py'
Jan 27 15:01:14 compute-0 sudo[195826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:14 compute-0 python3.9[195828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:14 compute-0 sudo[195826]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:14 compute-0 sudo[195978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdrlufrqlecfssqytymefwheiqmpoub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526074.608605-602-12106876631143/AnsiballZ_file.py'
Jan 27 15:01:14 compute-0 sudo[195978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:15 compute-0 python3.9[195980]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:15 compute-0 sudo[195978]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:15 compute-0 sudo[196130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmnlriiralaxowdgsnzfrbiiufjgyvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526075.245948-610-90805602882901/AnsiballZ_stat.py'
Jan 27 15:01:15 compute-0 sudo[196130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:15 compute-0 python3.9[196132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:15 compute-0 sudo[196130]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:15 compute-0 sudo[196208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxbtfgrolwiobuclrgpvdzlahxypizp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526075.245948-610-90805602882901/AnsiballZ_file.py'
Jan 27 15:01:15 compute-0 sudo[196208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:16 compute-0 python3.9[196210]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.kijt5ssp recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:16 compute-0 sudo[196208]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:16 compute-0 python3.9[196360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:18 compute-0 sudo[196781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyizathgnxlgovjszmjwwnmzktdhmgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526078.4634042-647-75734873841512/AnsiballZ_container_config_data.py'
Jan 27 15:01:18 compute-0 sudo[196781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:19 compute-0 python3.9[196783]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 27 15:01:19 compute-0 sudo[196781]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:19 compute-0 sudo[196933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmzivqphhxwudjzyrbhwirsazgbpgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526079.6540604-658-10525414354220/AnsiballZ_container_config_hash.py'
Jan 27 15:01:19 compute-0 sudo[196933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:20 compute-0 python3.9[196935]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:01:20 compute-0 sudo[196933]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:20 compute-0 sudo[197085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcevkmdkctvsvxvencrkawwnmddnwzi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526080.5714722-668-250389736946639/AnsiballZ_edpm_container_manage.py'
Jan 27 15:01:20 compute-0 sudo[197085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:21 compute-0 python3[197087]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:01:21 compute-0 podman[197123]: 2026-01-27 15:01:21.32351141 +0000 UTC m=+0.023533516 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 15:01:21 compute-0 podman[197123]: 2026-01-27 15:01:21.614708998 +0000 UTC m=+0.314731084 container create b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter)
Jan 27 15:01:21 compute-0 python3[197087]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 27 15:01:21 compute-0 sudo[197085]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:22 compute-0 sudo[197310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxbalfqoryzdmratyciaaniqpmrebea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526081.9747314-676-74847807208837/AnsiballZ_stat.py'
Jan 27 15:01:22 compute-0 sudo[197310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:22 compute-0 python3.9[197312]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:22 compute-0 sudo[197310]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:22 compute-0 sudo[197464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmalxhhotgbcimspczifnerpfsarftcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526082.7002819-685-210417932877414/AnsiballZ_file.py'
Jan 27 15:01:22 compute-0 sudo[197464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:23 compute-0 python3.9[197466]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:23 compute-0 sudo[197464]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:23 compute-0 sudo[197540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjtonszzdimfqrvpixwnzityqttaoocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526082.7002819-685-210417932877414/AnsiballZ_stat.py'
Jan 27 15:01:23 compute-0 sudo[197540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:23 compute-0 python3.9[197542]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:23 compute-0 sudo[197540]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:24 compute-0 sudo[197691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxldxarbaxmpcafasoqkdeblfgovogq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526083.6876154-685-14974042189452/AnsiballZ_copy.py'
Jan 27 15:01:24 compute-0 sudo[197691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:24 compute-0 python3.9[197693]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526083.6876154-685-14974042189452/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:24 compute-0 sudo[197691]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:24 compute-0 sudo[197767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvcrhdusarfwdnegxphzeqterxiwehg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526083.6876154-685-14974042189452/AnsiballZ_systemd.py'
Jan 27 15:01:24 compute-0 sudo[197767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:25 compute-0 python3.9[197769]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:01:25 compute-0 systemd[1]: Reloading.
Jan 27 15:01:25 compute-0 systemd-rc-local-generator[197793]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:25 compute-0 systemd-sysv-generator[197797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:25 compute-0 sudo[197767]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:26 compute-0 sudo[197879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmbfclhbapaluwxnkfvzvzltkgohmrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526083.6876154-685-14974042189452/AnsiballZ_systemd.py'
Jan 27 15:01:26 compute-0 sudo[197879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:26 compute-0 python3.9[197881]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:01:26 compute-0 systemd[1]: Reloading.
Jan 27 15:01:26 compute-0 systemd-rc-local-generator[197911]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:26 compute-0 systemd-sysv-generator[197915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:27 compute-0 systemd[1]: Starting node_exporter container...
Jan 27 15:01:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6c542b17d093cc35d80546df58415b3c744cc94c2bcb5794b0ed005b302ad3/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6c542b17d093cc35d80546df58415b3c744cc94c2bcb5794b0ed005b302ad3/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.
Jan 27 15:01:28 compute-0 podman[197922]: 2026-01-27 15:01:28.116953856 +0000 UTC m=+0.325525447 container init b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.141Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=arp
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=bcache
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=bonding
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=cpu
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=edac
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=filefd
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=netclass
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=netdev
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=netstat
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=nfs
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=nvme
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=softnet
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=systemd
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=xfs
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.142Z caller=node_exporter.go:117 level=info collector=zfs
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.143Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 27 15:01:28 compute-0 node_exporter[197937]: ts=2026-01-27T15:01:28.143Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 27 15:01:28 compute-0 podman[197922]: 2026-01-27 15:01:28.168602662 +0000 UTC m=+0.377174233 container start b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:01:28 compute-0 podman[197922]: node_exporter
Jan 27 15:01:28 compute-0 systemd[1]: Started node_exporter container.
Jan 27 15:01:28 compute-0 podman[197946]: 2026-01-27 15:01:28.332262781 +0000 UTC m=+0.142518930 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:01:28 compute-0 sudo[197879]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:29 compute-0 python3.9[198118]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:01:29 compute-0 sudo[198268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-entpzmevfpszexsrctjgbjanvkwtwxbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526089.5794008-730-116077565984670/AnsiballZ_stat.py'
Jan 27 15:01:29 compute-0 sudo[198268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:30 compute-0 python3.9[198270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:30 compute-0 sudo[198268]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:30 compute-0 sudo[198393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvvolqsocptddpgprjecroketdfqvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526089.5794008-730-116077565984670/AnsiballZ_copy.py'
Jan 27 15:01:30 compute-0 sudo[198393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:30 compute-0 python3.9[198395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526089.5794008-730-116077565984670/.source.yaml _original_basename=.jvueda4y follow=False checksum=5298966b1c4de3c6aea7e50e48e143cfb36a663c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:30 compute-0 sudo[198393]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:31 compute-0 sudo[198545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqszicctfbznmavpsigxeebutocmzhsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526090.9975333-745-91126222353382/AnsiballZ_stat.py'
Jan 27 15:01:31 compute-0 sudo[198545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:31 compute-0 python3.9[198547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:31 compute-0 sudo[198545]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:31 compute-0 sudo[198668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdnpdhnnbzwgfzskceucruldtqawyogf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526090.9975333-745-91126222353382/AnsiballZ_copy.py'
Jan 27 15:01:31 compute-0 sudo[198668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:32 compute-0 python3.9[198670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526090.9975333-745-91126222353382/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:32 compute-0 sudo[198668]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:33 compute-0 sudo[198820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upjqxkuxnrvxxsblccdhtkoirmzuwndz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526092.7672322-766-163983208381965/AnsiballZ_file.py'
Jan 27 15:01:33 compute-0 sudo[198820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:33 compute-0 python3.9[198822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:33 compute-0 sudo[198820]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:33 compute-0 sudo[198972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypqkuqirfstxqlvrthsmeljlzieizwqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526093.6273503-774-86085493404059/AnsiballZ_file.py'
Jan 27 15:01:33 compute-0 sudo[198972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:34 compute-0 python3.9[198974]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:34 compute-0 sudo[198972]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:34 compute-0 sudo[199124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnadnsirrwigcfsrdxntabemaxpblbzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526094.4769769-782-82916232202991/AnsiballZ_stat.py'
Jan 27 15:01:34 compute-0 sudo[199124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:35 compute-0 python3.9[199126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:35 compute-0 sudo[199124]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:35 compute-0 sudo[199202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkdghocbukxlntxuvydzxhdvligluzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526094.4769769-782-82916232202991/AnsiballZ_file.py'
Jan 27 15:01:35 compute-0 sudo[199202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:35 compute-0 python3.9[199204]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.im9bahpc recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:35 compute-0 sudo[199202]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:36 compute-0 python3.9[199354]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:37 compute-0 podman[199528]: 2026-01-27 15:01:37.302880894 +0000 UTC m=+0.058250978 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:01:38 compute-0 sudo[199794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bswkstuuxurkjgtpsybprwmtpljacimq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526098.0956476-819-56962059436259/AnsiballZ_container_config_data.py'
Jan 27 15:01:38 compute-0 sudo[199794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:38 compute-0 python3.9[199796]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 27 15:01:38 compute-0 sudo[199794]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:39 compute-0 sudo[199946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poprdwzdtjblhwfobhnwehikydpxvxty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526099.007231-830-73654742346332/AnsiballZ_container_config_hash.py'
Jan 27 15:01:39 compute-0 sudo[199946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:39 compute-0 python3.9[199948]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:01:39 compute-0 sudo[199946]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:40 compute-0 podman[199996]: 2026-01-27 15:01:40.290673922 +0000 UTC m=+0.054746992 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4)
Jan 27 15:01:40 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:01:40 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Failed with result 'exit-code'.
Jan 27 15:01:40 compute-0 sudo[200127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzorwwlknakgozsnymsjlzbqlbbngjrl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526100.2008257-840-153627406030833/AnsiballZ_edpm_container_manage.py'
Jan 27 15:01:40 compute-0 sudo[200127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:40 compute-0 podman[200091]: 2026-01-27 15:01:40.569484308 +0000 UTC m=+0.077576028 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:01:40 compute-0 python3[200136]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:01:46 compute-0 podman[200154]: 2026-01-27 15:01:46.148837647 +0000 UTC m=+5.098848172 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 15:01:46 compute-0 podman[200250]: 2026-01-27 15:01:46.282240176 +0000 UTC m=+0.026472657 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 15:01:46 compute-0 podman[200250]: 2026-01-27 15:01:46.986362947 +0000 UTC m=+0.730595418 container create 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible)
Jan 27 15:01:46 compute-0 python3[200136]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z 
quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 27 15:01:47 compute-0 sudo[200127]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:47 compute-0 sudo[200438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwsnawhoqbkjlchuiovtgzdrqodoigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526107.3741202-848-53545636090187/AnsiballZ_stat.py'
Jan 27 15:01:47 compute-0 sudo[200438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:47 compute-0 python3.9[200440]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:47 compute-0 sudo[200438]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:48 compute-0 sudo[200592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggmafybkhlrlhfqfqdkdvysnocyggojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526108.39777-857-141652636888984/AnsiballZ_file.py'
Jan 27 15:01:48 compute-0 sudo[200592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:48 compute-0 python3.9[200594]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:48 compute-0 sudo[200592]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:49 compute-0 sudo[200668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcpeieetoyskkgnajarcpebdjzlbqhzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526108.39777-857-141652636888984/AnsiballZ_stat.py'
Jan 27 15:01:49 compute-0 sudo[200668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:49 compute-0 python3.9[200670]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:01:49 compute-0 sudo[200668]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:49 compute-0 sudo[200819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sckelalcsdyftipgkfkywsnlkmzixfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526109.406336-857-38887801701973/AnsiballZ_copy.py'
Jan 27 15:01:49 compute-0 sudo[200819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:50 compute-0 python3.9[200821]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526109.406336-857-38887801701973/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:50 compute-0 sudo[200819]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:50 compute-0 sudo[200895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njjzitzljenqiyceuqktgrsbjnbfuxlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526109.406336-857-38887801701973/AnsiballZ_systemd.py'
Jan 27 15:01:50 compute-0 sudo[200895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:50 compute-0 python3.9[200897]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:01:50 compute-0 systemd[1]: Reloading.
Jan 27 15:01:50 compute-0 systemd-rc-local-generator[200923]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:50 compute-0 systemd-sysv-generator[200928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:50 compute-0 sudo[200895]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:51 compute-0 sudo[201006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhdiuoszegfwsaatcqxvlhwtsqldkjkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526109.406336-857-38887801701973/AnsiballZ_systemd.py'
Jan 27 15:01:51 compute-0 sudo[201006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:51 compute-0 python3.9[201008]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:01:51 compute-0 systemd[1]: Reloading.
Jan 27 15:01:51 compute-0 systemd-rc-local-generator[201037]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:01:51 compute-0 systemd-sysv-generator[201041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:01:51 compute-0 systemd[1]: Starting podman_exporter container...
Jan 27 15:01:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93459b69e9accf9c26ec436fdd02f2828354403b5bc95bb658a55377bc4e11/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93459b69e9accf9c26ec436fdd02f2828354403b5bc95bb658a55377bc4e11/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:01:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.
Jan 27 15:01:52 compute-0 podman[201047]: 2026-01-27 15:01:52.274757197 +0000 UTC m=+0.406700464 container init 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.295Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.295Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.295Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.295Z caller=handler.go:105 level=info collector=container
Jan 27 15:01:52 compute-0 podman[201047]: 2026-01-27 15:01:52.299219138 +0000 UTC m=+0.431162375 container start 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:01:52 compute-0 systemd[1]: Starting Podman API Service...
Jan 27 15:01:52 compute-0 systemd[1]: Started Podman API Service.
Jan 27 15:01:52 compute-0 podman[201047]: podman_exporter
Jan 27 15:01:52 compute-0 systemd[1]: Started podman_exporter container.
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="Setting parallel job count to 25"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="Using sqlite as database backend"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 27 15:01:52 compute-0 podman[201073]: @ - - [27/Jan/2026:15:01:52 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 27 15:01:52 compute-0 podman[201073]: time="2026-01-27T15:01:52Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:01:52 compute-0 sudo[201006]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:52 compute-0 podman[201071]: 2026-01-27 15:01:52.362902125 +0000 UTC m=+0.050492276 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:01:52 compute-0 systemd[1]: 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1-3fdf0adcc7b5889e.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:01:52 compute-0 systemd[1]: 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1-3fdf0adcc7b5889e.service: Failed with result 'exit-code'.
Jan 27 15:01:52 compute-0 podman[201073]: @ - - [27/Jan/2026:15:01:52 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18094 "" "Go-http-client/1.1"
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.370Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.371Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 27 15:01:52 compute-0 podman_exporter[201062]: ts=2026-01-27T15:01:52.371Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 27 15:01:53 compute-0 python3.9[201260]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:01:53 compute-0 sudo[201410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejdwmiougttwkrhbxhdsfzsixfxlrroi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526113.706464-902-75540274961730/AnsiballZ_stat.py'
Jan 27 15:01:53 compute-0 sudo[201410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:54 compute-0 python3.9[201412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:54 compute-0 sudo[201410]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:54 compute-0 sudo[201535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqtormvxabjltcrabiaiswbreqtvxpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526113.706464-902-75540274961730/AnsiballZ_copy.py'
Jan 27 15:01:54 compute-0 sudo[201535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:54 compute-0 python3.9[201537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526113.706464-902-75540274961730/.source.yaml _original_basename=.agjk9moe follow=False checksum=1574c32eba701c146bb8a325d14bda4196f205e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:54 compute-0 sudo[201535]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:55 compute-0 sudo[201687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmeyqizfvsrxyqoqiqdrnbfjztkopou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526114.8890436-917-163938212511341/AnsiballZ_stat.py'
Jan 27 15:01:55 compute-0 sudo[201687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:55 compute-0 python3.9[201689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:55 compute-0 sudo[201687]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:55 compute-0 sudo[201810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwbhtrasbzcipgyvduzzyylvfigmjml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526114.8890436-917-163938212511341/AnsiballZ_copy.py'
Jan 27 15:01:55 compute-0 sudo[201810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:55 compute-0 python3.9[201812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526114.8890436-917-163938212511341/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:55 compute-0 sudo[201810]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:56 compute-0 sudo[201962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqabivzvrxgkfzzwcjgrmqzdvjgwvhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526116.5737422-938-162871780241747/AnsiballZ_file.py'
Jan 27 15:01:56 compute-0 sudo[201962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:57 compute-0 python3.9[201964]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:57 compute-0 sudo[201962]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:57 compute-0 sudo[202114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqstcryioeyjomsycktjkilmbmjrmluz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526117.489756-946-212156528443252/AnsiballZ_file.py'
Jan 27 15:01:57 compute-0 sudo[202114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:58 compute-0 python3.9[202116]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:01:58 compute-0 sudo[202114]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:58 compute-0 podman[202240]: 2026-01-27 15:01:58.531138314 +0000 UTC m=+0.056919012 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:01:58 compute-0 sudo[202281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxwzkglpajqrfkbvikmbqlbthmlyarm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526118.2145412-954-170947431405584/AnsiballZ_stat.py'
Jan 27 15:01:58 compute-0 sudo[202281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:58 compute-0 python3.9[202290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:01:58 compute-0 sudo[202281]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:59 compute-0 sudo[202367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvrqzkhcmrlcokrhcgaiegsultklxdgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526118.2145412-954-170947431405584/AnsiballZ_file.py'
Jan 27 15:01:59 compute-0 sudo[202367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:01:59 compute-0 python3.9[202369]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.18o0u45o recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:01:59 compute-0 sudo[202367]: pam_unix(sudo:session): session closed for user root
Jan 27 15:01:59 compute-0 auditd[701]: Audit daemon rotating log files
Jan 27 15:01:59 compute-0 python3.9[202519]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:02:00.211 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:02:00.212 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:02:00.212 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:02:01 compute-0 anacron[4406]: Job `cron.weekly' started
Jan 27 15:02:01 compute-0 anacron[4406]: Job `cron.weekly' terminated
Jan 27 15:02:01 compute-0 sudo[202942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxcyzzjnwhylakystwpusxcjkrlargh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526121.5335078-991-269568271982931/AnsiballZ_container_config_data.py'
Jan 27 15:02:01 compute-0 sudo[202942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:01 compute-0 python3.9[202944]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 27 15:02:02 compute-0 sudo[202942]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:02 compute-0 sudo[203094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjdadibkxvndbufkffibifrxsixmhcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526122.3273242-1002-124433386427471/AnsiballZ_container_config_hash.py'
Jan 27 15:02:02 compute-0 sudo[203094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:02 compute-0 python3.9[203096]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:02:02 compute-0 sudo[203094]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:03 compute-0 sudo[203246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnuxtlialygzllaprqylwilrqocscmmc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526123.1263733-1012-207401920784000/AnsiballZ_edpm_container_manage.py'
Jan 27 15:02:03 compute-0 sudo[203246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:03 compute-0 python3[203248]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:02:06 compute-0 podman[203261]: 2026-01-27 15:02:06.53117113 +0000 UTC m=+2.750935974 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 15:02:06 compute-0 podman[203358]: 2026-01-27 15:02:06.72549112 +0000 UTC m=+0.105680670 container create f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:02:06 compute-0 podman[203358]: 2026-01-27 15:02:06.642534674 +0000 UTC m=+0.022724244 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 15:02:06 compute-0 python3[203248]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume 
/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 15:02:06 compute-0 sudo[203246]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:07 compute-0 sudo[203547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krjtdqmiwdfwbhplabmaxyozsgllrbcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526127.0036218-1020-142398296070574/AnsiballZ_stat.py'
Jan 27 15:02:07 compute-0 sudo[203547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:07 compute-0 python3.9[203549]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:02:07 compute-0 sudo[203547]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:08 compute-0 sudo[203716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbhwvxxfzlnzsepeuoyjbhadklozysw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526127.7317443-1029-267761293943372/AnsiballZ_file.py'
Jan 27 15:02:08 compute-0 podman[203675]: 2026-01-27 15:02:08.058184797 +0000 UTC m=+0.061620741 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:02:08 compute-0 sudo[203716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:08 compute-0 python3.9[203722]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:08 compute-0 sudo[203716]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:08 compute-0 sudo[203796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofngcytdfsbsndwkbwniijzjgogpher ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526127.7317443-1029-267761293943372/AnsiballZ_stat.py'
Jan 27 15:02:08 compute-0 sudo[203796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:08 compute-0 python3.9[203798]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:02:08 compute-0 sudo[203796]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:09 compute-0 sudo[203947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefisusrazpzquzkfrxmuwtufttuxcnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526128.8034756-1029-161855624460550/AnsiballZ_copy.py'
Jan 27 15:02:09 compute-0 sudo[203947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:09 compute-0 python3.9[203949]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526128.8034756-1029-161855624460550/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:09 compute-0 sudo[203947]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:09 compute-0 sudo[204023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xubtvdxankjolpwnpppglwfnuttxqdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526128.8034756-1029-161855624460550/AnsiballZ_systemd.py'
Jan 27 15:02:09 compute-0 sudo[204023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:10 compute-0 python3.9[204025]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:02:10 compute-0 systemd[1]: Reloading.
Jan 27 15:02:10 compute-0 systemd-rc-local-generator[204056]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:02:10 compute-0 systemd-sysv-generator[204061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:02:10 compute-0 sudo[204023]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:10 compute-0 podman[204063]: 2026-01-27 15:02:10.526633443 +0000 UTC m=+0.060228553 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 15:02:10 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:02:10 compute-0 systemd[1]: 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088-6171f51e35808055.service: Failed with result 'exit-code'.
Jan 27 15:02:10 compute-0 sudo[204173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aauzdnjdwhgawomnuoidpxeltlwxpnco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526128.8034756-1029-161855624460550/AnsiballZ_systemd.py'
Jan 27 15:02:10 compute-0 sudo[204173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:10 compute-0 podman[204129]: 2026-01-27 15:02:10.804594906 +0000 UTC m=+0.092575330 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:02:11 compute-0 python3.9[204181]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:02:11 compute-0 systemd[1]: Reloading.
Jan 27 15:02:11 compute-0 systemd-sysv-generator[204217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:02:11 compute-0 systemd-rc-local-generator[204214]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:02:11 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.488 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.549 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264cb8370686eae164460b59c2e898eb34530a5e34cef6d271477fdacfb60cb/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264cb8370686eae164460b59c2e898eb34530a5e34cef6d271477fdacfb60cb/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264cb8370686eae164460b59c2e898eb34530a5e34cef6d271477fdacfb60cb/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:02:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.
Jan 27 15:02:11 compute-0 podman[204224]: 2026-01-27 15:02:11.865413088 +0000 UTC m=+0.395062845 container init f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *bridge.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *coverage.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *datapath.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *iface.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *memory.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *ovn.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *pmd_perf.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *pmd_rxq.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: INFO    15:02:11 main.go:48: registering *vswitch.Collector
Jan 27 15:02:11 compute-0 openstack_network_exporter[204239]: NOTICE  15:02:11 main.go:76: listening on https://:9105/metrics
Jan 27 15:02:11 compute-0 podman[204224]: 2026-01-27 15:02:11.891592046 +0000 UTC m=+0.421220653 container start f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.975 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.975 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.976 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.977 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.977 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:02:11 compute-0 nova_compute[185191]: 2026-01-27 15:02:11.977 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:12 compute-0 podman[204224]: openstack_network_exporter
Jan 27 15:02:12 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.058 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.059 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.059 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.059 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:02:12 compute-0 sudo[204173]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:12 compute-0 podman[204249]: 2026-01-27 15:02:12.101515903 +0000 UTC m=+0.199168803 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.223 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.224 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5793MB free_disk=72.43709945678711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.224 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.225 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.323 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.323 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.346 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.374 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.376 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:02:12 compute-0 nova_compute[185191]: 2026-01-27 15:02:12.376 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:02:12 compute-0 python3.9[204422]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:02:13 compute-0 nova_compute[185191]: 2026-01-27 15:02:13.344 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:13 compute-0 nova_compute[185191]: 2026-01-27 15:02:13.345 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:13 compute-0 nova_compute[185191]: 2026-01-27 15:02:13.345 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:02:13 compute-0 sudo[204572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjlkdotnulflmehnuvaisomfwedysohq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526133.3403542-1074-23473061558371/AnsiballZ_stat.py'
Jan 27 15:02:13 compute-0 sudo[204572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:13 compute-0 python3.9[204574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:13 compute-0 sudo[204572]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:14 compute-0 sudo[204697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsxvsxqrieooixoddybougphajovvkhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526133.3403542-1074-23473061558371/AnsiballZ_copy.py'
Jan 27 15:02:14 compute-0 sudo[204697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:14 compute-0 python3.9[204699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526133.3403542-1074-23473061558371/.source.yaml _original_basename=.l52jc06f follow=False checksum=7fe3a43f3c34bdf4c76b36fb7604cd7055ecae5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:14 compute-0 sudo[204697]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:14 compute-0 sudo[204849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpestulzqddtwnwbdoumhcryedgdztkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526134.6124253-1089-82253000214668/AnsiballZ_find.py'
Jan 27 15:02:14 compute-0 sudo[204849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:15 compute-0 python3.9[204851]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 15:02:15 compute-0 sudo[204849]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:16 compute-0 sudo[205001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinwobrgfhljrhoqrwuvcjmvhugjinhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526135.6611564-1099-98116005454850/AnsiballZ_podman_container_info.py'
Jan 27 15:02:16 compute-0 sudo[205001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:16 compute-0 python3.9[205003]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 15:02:16 compute-0 sudo[205001]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:17 compute-0 sudo[205166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utpszduvdqoakluvehlapgiadptkjmwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526136.652076-1107-230593732077631/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:17 compute-0 sudo[205166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:17 compute-0 python3.9[205168]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:17 compute-0 systemd[1]: Started libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope.
Jan 27 15:02:17 compute-0 podman[205169]: 2026-01-27 15:02:17.51775086 +0000 UTC m=+0.133739019 container exec e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 15:02:17 compute-0 podman[205169]: 2026-01-27 15:02:17.552960005 +0000 UTC m=+0.168948144 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 15:02:17 compute-0 systemd[1]: libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope: Deactivated successfully.
Jan 27 15:02:17 compute-0 sudo[205166]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:18 compute-0 sudo[205351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjnowpwanzzdpgqlnbtifpmzmlrjppd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526137.7811651-1115-165921591611617/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:18 compute-0 sudo[205351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:18 compute-0 python3.9[205353]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:18 compute-0 systemd[1]: Started libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope.
Jan 27 15:02:18 compute-0 podman[205354]: 2026-01-27 15:02:18.34441106 +0000 UTC m=+0.083535722 container exec e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:02:18 compute-0 podman[205374]: 2026-01-27 15:02:18.410900943 +0000 UTC m=+0.052363517 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:02:18 compute-0 podman[205354]: 2026-01-27 15:02:18.442054377 +0000 UTC m=+0.181179039 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:02:18 compute-0 systemd[1]: libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope: Deactivated successfully.
Jan 27 15:02:18 compute-0 sudo[205351]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:18 compute-0 sudo[205536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtcxlamixegffscgufnmlfltxhfdmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526138.63769-1123-116780837259277/AnsiballZ_file.py'
Jan 27 15:02:18 compute-0 sudo[205536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:19 compute-0 python3.9[205538]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:19 compute-0 sudo[205536]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:19 compute-0 sudo[205688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvjnvovvdwsqxwjvqvrgdhmdbacbsurp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526139.2753375-1132-199519395388242/AnsiballZ_podman_container_info.py'
Jan 27 15:02:19 compute-0 sudo[205688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:20 compute-0 python3.9[205690]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 15:02:20 compute-0 sudo[205688]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:20 compute-0 sudo[205853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrjxvxaqiuurmspejizugceirrrhstkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526140.4649591-1140-266452815389426/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:20 compute-0 sudo[205853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:21 compute-0 python3.9[205855]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:21 compute-0 systemd[1]: Started libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope.
Jan 27 15:02:21 compute-0 podman[205856]: 2026-01-27 15:02:21.387247988 +0000 UTC m=+0.305144778 container exec ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 15:02:21 compute-0 podman[205876]: 2026-01-27 15:02:21.485856722 +0000 UTC m=+0.086068401 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 15:02:21 compute-0 podman[205856]: 2026-01-27 15:02:21.581362501 +0000 UTC m=+0.499259261 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:02:21 compute-0 systemd[1]: libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope: Deactivated successfully.
Jan 27 15:02:21 compute-0 sudo[205853]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:22 compute-0 sudo[206036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yquanvuhdroqnptawannnuzxitcfjevm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526141.850315-1148-23307244615240/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:22 compute-0 sudo[206036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:22 compute-0 python3.9[206038]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:22 compute-0 systemd[1]: Started libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope.
Jan 27 15:02:22 compute-0 podman[206039]: 2026-01-27 15:02:22.693717197 +0000 UTC m=+0.342724891 container exec ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 27 15:02:22 compute-0 podman[206056]: 2026-01-27 15:02:22.78792653 +0000 UTC m=+0.089743851 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:02:22 compute-0 podman[206060]: 2026-01-27 15:02:22.794599153 +0000 UTC m=+0.089172636 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:02:22 compute-0 podman[206039]: 2026-01-27 15:02:22.837068758 +0000 UTC m=+0.486076472 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:02:22 compute-0 systemd[1]: libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope: Deactivated successfully.
Jan 27 15:02:22 compute-0 sudo[206036]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:23 compute-0 sudo[206242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwnvewddekhrplhrxpxwirwypnmdpfqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526143.0632875-1156-221283369015450/AnsiballZ_file.py'
Jan 27 15:02:23 compute-0 sudo[206242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:23 compute-0 python3.9[206244]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:23 compute-0 sudo[206242]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:24 compute-0 sudo[206394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngtznyqcprekvqoyirizypqoxggmqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526143.7600288-1165-183126735279227/AnsiballZ_podman_container_info.py'
Jan 27 15:02:24 compute-0 sudo[206394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:24 compute-0 python3.9[206396]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 15:02:24 compute-0 sudo[206394]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:25 compute-0 sudo[206558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstgevdxyteurmzejsqsyqbrpwxrrjtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526144.683351-1173-193069760044900/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:25 compute-0 sudo[206558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:25 compute-0 python3.9[206560]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:25 compute-0 systemd[1]: Started libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope.
Jan 27 15:02:25 compute-0 podman[206561]: 2026-01-27 15:02:25.463634929 +0000 UTC m=+0.209139037 container exec 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Jan 27 15:02:25 compute-0 podman[206581]: 2026-01-27 15:02:25.548815245 +0000 UTC m=+0.070398132 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:02:25 compute-0 podman[206561]: 2026-01-27 15:02:25.677294519 +0000 UTC m=+0.422798647 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 15:02:25 compute-0 systemd[1]: libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope: Deactivated successfully.
Jan 27 15:02:26 compute-0 sudo[206558]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:26 compute-0 sudo[206743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-higqppigquygftknhpbkezvshvghogdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526146.2189457-1181-18026492355426/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:26 compute-0 sudo[206743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:26 compute-0 python3.9[206745]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:27 compute-0 systemd[1]: Started libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope.
Jan 27 15:02:27 compute-0 podman[206746]: 2026-01-27 15:02:27.782038776 +0000 UTC m=+0.973416893 container exec 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:02:28 compute-0 podman[206746]: 2026-01-27 15:02:28.064009297 +0000 UTC m=+1.255387414 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true)
Jan 27 15:02:28 compute-0 systemd[1]: libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope: Deactivated successfully.
Jan 27 15:02:28 compute-0 sudo[206743]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:29 compute-0 sudo[206944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flbzzhhpkpdiylajubsswoymndwgoykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526148.7656374-1189-268499081536229/AnsiballZ_file.py'
Jan 27 15:02:29 compute-0 sudo[206944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:29 compute-0 podman[206902]: 2026-01-27 15:02:29.115860569 +0000 UTC m=+0.072556430 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:02:29 compute-0 python3.9[206953]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:29 compute-0 sudo[206944]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:29 compute-0 sudo[207105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofimqzlkhaajnjtenqbfnbvskbljdpef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526149.6114671-1198-29882820395204/AnsiballZ_podman_container_info.py'
Jan 27 15:02:29 compute-0 sudo[207105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:30 compute-0 python3.9[207107]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 15:02:30 compute-0 sudo[207105]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:30 compute-0 sudo[207271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yecvrpukfhfwdwjlqypsirxmclkigxzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526150.4377277-1206-235228601887714/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:30 compute-0 sudo[207271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:31 compute-0 python3.9[207273]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:31 compute-0 systemd[1]: Started libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope.
Jan 27 15:02:31 compute-0 podman[207274]: 2026-01-27 15:02:31.328458896 +0000 UTC m=+0.266764808 container exec b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:02:31 compute-0 podman[207293]: 2026-01-27 15:02:31.445824617 +0000 UTC m=+0.105749865 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:02:31 compute-0 podman[207274]: 2026-01-27 15:02:31.55517259 +0000 UTC m=+0.493478502 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:02:31 compute-0 systemd[1]: libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope: Deactivated successfully.
Jan 27 15:02:31 compute-0 sudo[207271]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:32 compute-0 sudo[207455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-badmrlwqsaclpucllqmavkqprmpzvgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526151.8180952-1214-70235969093429/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:32 compute-0 sudo[207455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:32 compute-0 python3.9[207457]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:32 compute-0 systemd[1]: Started libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope.
Jan 27 15:02:32 compute-0 podman[207458]: 2026-01-27 15:02:32.59891222 +0000 UTC m=+0.314059666 container exec b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:02:32 compute-0 podman[207477]: 2026-01-27 15:02:32.716617911 +0000 UTC m=+0.098030005 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:02:32 compute-0 podman[207458]: 2026-01-27 15:02:32.880461211 +0000 UTC m=+0.595608667 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:02:32 compute-0 systemd[1]: libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope: Deactivated successfully.
Jan 27 15:02:33 compute-0 sudo[207455]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:33 compute-0 sudo[207639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzlyaijdivvfqpterxmqsqfzveshmfhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526153.5632722-1222-214759988813275/AnsiballZ_file.py'
Jan 27 15:02:33 compute-0 sudo[207639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:34 compute-0 python3.9[207641]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:34 compute-0 sudo[207639]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:34 compute-0 sudo[207791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gitifmdrjrfvawfdeqjgicbpciinmdcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526154.4282932-1231-201449755947438/AnsiballZ_podman_container_info.py'
Jan 27 15:02:34 compute-0 sudo[207791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:34 compute-0 python3.9[207793]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 15:02:35 compute-0 sudo[207791]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:35 compute-0 sudo[207956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoddxubufshnceaypazrralazdryrrdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526155.2435563-1239-258889486627307/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:35 compute-0 sudo[207956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:35 compute-0 python3.9[207958]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:36 compute-0 systemd[1]: Started libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope.
Jan 27 15:02:36 compute-0 podman[207959]: 2026-01-27 15:02:36.220302104 +0000 UTC m=+0.454186449 container exec 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:02:36 compute-0 podman[207959]: 2026-01-27 15:02:36.360420616 +0000 UTC m=+0.594304951 container exec_died 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:02:36 compute-0 sudo[207956]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:36 compute-0 systemd[1]: libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope: Deactivated successfully.
Jan 27 15:02:37 compute-0 sudo[208140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkxmswcjyctjxtoonxvqoiwsnnhetmfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526157.0210977-1247-110551304672947/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:37 compute-0 sudo[208140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:37 compute-0 python3.9[208142]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:37 compute-0 systemd[1]: Started libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope.
Jan 27 15:02:37 compute-0 podman[208143]: 2026-01-27 15:02:37.799537543 +0000 UTC m=+0.267613251 container exec 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:02:37 compute-0 podman[208143]: 2026-01-27 15:02:37.970059715 +0000 UTC m=+0.438135373 container exec_died 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:02:38 compute-0 systemd[1]: libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope: Deactivated successfully.
Jan 27 15:02:38 compute-0 sudo[208140]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:38 compute-0 podman[208174]: 2026-01-27 15:02:38.340408167 +0000 UTC m=+0.091771315 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 15:02:38 compute-0 sudo[208341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrnmbqfxkholozsylarysehdrtvlteeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526158.5161736-1255-123594266389357/AnsiballZ_file.py'
Jan 27 15:02:38 compute-0 sudo[208341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:39 compute-0 python3.9[208343]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:39 compute-0 sudo[208341]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:39 compute-0 sudo[208493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmfcoigfbqmrudgzbosbycurxdpcaed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526159.4108064-1264-107025413448824/AnsiballZ_podman_container_info.py'
Jan 27 15:02:39 compute-0 sudo[208493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:40 compute-0 python3.9[208495]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 15:02:40 compute-0 sudo[208493]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:40 compute-0 sudo[208668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjjblznphurifpgepplssvopcorghuqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526160.3712523-1272-79073791520157/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:40 compute-0 sudo[208668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:40 compute-0 podman[208632]: 2026-01-27 15:02:40.760508352 +0000 UTC m=+0.112573541 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126)
Jan 27 15:02:40 compute-0 python3.9[208681]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:41 compute-0 systemd[1]: Started libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope.
Jan 27 15:02:41 compute-0 podman[208683]: 2026-01-27 15:02:41.34624137 +0000 UTC m=+0.402036448 container exec f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=openstack_network_exporter, vendor=Red Hat, Inc.)
Jan 27 15:02:41 compute-0 podman[208696]: 2026-01-27 15:02:41.473613784 +0000 UTC m=+0.224058372 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:02:41 compute-0 podman[208683]: 2026-01-27 15:02:41.501463634 +0000 UTC m=+0.557258682 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 15:02:41 compute-0 systemd[1]: libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope: Deactivated successfully.
Jan 27 15:02:41 compute-0 sudo[208668]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:42 compute-0 podman[208817]: 2026-01-27 15:02:42.32945312 +0000 UTC m=+0.083902940 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:02:42 compute-0 sudo[208910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvotekyplbuvdegtnsmrnrxzwkgpgdrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526162.1175084-1280-5062291274251/AnsiballZ_podman_container_exec.py'
Jan 27 15:02:42 compute-0 sudo[208910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:42 compute-0 python3.9[208912]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:02:42 compute-0 systemd[1]: Started libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope.
Jan 27 15:02:43 compute-0 podman[208913]: 2026-01-27 15:02:43.011131055 +0000 UTC m=+0.292076798 container exec f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., config_id=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:02:43 compute-0 podman[208932]: 2026-01-27 15:02:43.081962697 +0000 UTC m=+0.057939351 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, 
architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter)
Jan 27 15:02:43 compute-0 podman[208913]: 2026-01-27 15:02:43.11835276 +0000 UTC m=+0.399298413 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, version=9.6, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64)
Jan 27 15:02:43 compute-0 systemd[1]: libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope: Deactivated successfully.
Jan 27 15:02:43 compute-0 sudo[208910]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:43 compute-0 sudo[209095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqkmrxgdgsguisezrsjrfpppadyezdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526163.6100173-1288-44648343077358/AnsiballZ_file.py'
Jan 27 15:02:43 compute-0 sudo[209095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:44 compute-0 python3.9[209097]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:44 compute-0 sudo[209095]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:44 compute-0 sudo[209247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefcrikspalgzwdkphityeqayclavzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526164.5734313-1297-183186915746273/AnsiballZ_file.py'
Jan 27 15:02:44 compute-0 sudo[209247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:45 compute-0 python3.9[209249]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:45 compute-0 sudo[209247]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:45 compute-0 sudo[209399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzepskwnhjgfzfcfgilfjkvbcuvkgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526165.389324-1305-119920033315181/AnsiballZ_stat.py'
Jan 27 15:02:45 compute-0 sudo[209399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:45 compute-0 python3.9[209401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:45 compute-0 sudo[209399]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:46 compute-0 sudo[209522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttsbtviimspyhxyarfmuhutzukglldro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526165.389324-1305-119920033315181/AnsiballZ_copy.py'
Jan 27 15:02:46 compute-0 sudo[209522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:46 compute-0 python3.9[209524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526165.389324-1305-119920033315181/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:46 compute-0 sudo[209522]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:47 compute-0 sudo[209674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzezxhgndvzejvngnzcpfndcsnltcxev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526166.8631845-1321-227557018842763/AnsiballZ_file.py'
Jan 27 15:02:47 compute-0 sudo[209674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:47 compute-0 python3.9[209676]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:47 compute-0 sudo[209674]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:47 compute-0 sudo[209826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uexsfwkcchptqepyyharbwgukhktkzol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526167.611166-1329-83825129484505/AnsiballZ_stat.py'
Jan 27 15:02:47 compute-0 sudo[209826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:48 compute-0 python3.9[209828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:48 compute-0 sudo[209826]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:48 compute-0 sudo[209904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngeqvaporuqnsfiwbraacftjyxitayaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526167.611166-1329-83825129484505/AnsiballZ_file.py'
Jan 27 15:02:48 compute-0 sudo[209904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:48 compute-0 python3.9[209906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:48 compute-0 sudo[209904]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:49 compute-0 sudo[210056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxpjfnqodvxkqcxmdfmkqegagowgkupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526168.897216-1341-21921259448386/AnsiballZ_stat.py'
Jan 27 15:02:49 compute-0 sudo[210056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:49 compute-0 python3.9[210058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:49 compute-0 sudo[210056]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:49 compute-0 sudo[210134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-denfdpxdybagcdmphvidlecbbanslpoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526168.897216-1341-21921259448386/AnsiballZ_file.py'
Jan 27 15:02:49 compute-0 sudo[210134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:49 compute-0 python3.9[210136]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8noez436 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:49 compute-0 sudo[210134]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:50 compute-0 sudo[210286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inaniahdsatcwvheaetjdlkshywdairm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526170.080608-1353-77951431361179/AnsiballZ_stat.py'
Jan 27 15:02:50 compute-0 sudo[210286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:50 compute-0 python3.9[210288]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:50 compute-0 sudo[210286]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:50 compute-0 sudo[210364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apgimogcwdmrmkkhzcydvqiuhuugczkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526170.080608-1353-77951431361179/AnsiballZ_file.py'
Jan 27 15:02:50 compute-0 sudo[210364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:51 compute-0 python3.9[210366]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:51 compute-0 sudo[210364]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:51 compute-0 sudo[210516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdnyujtndabrtajwhnusqezfwulcezu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526171.2475855-1366-4531611570112/AnsiballZ_command.py'
Jan 27 15:02:51 compute-0 sudo[210516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:51 compute-0 python3.9[210518]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:02:51 compute-0 sudo[210516]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:52 compute-0 sudo[210669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tngmthrxehqehkedeeywqilgfxunxuxb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526171.9240615-1374-206616263248562/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 15:02:52 compute-0 sudo[210669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:52 compute-0 python3[210671]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 15:02:52 compute-0 sudo[210669]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:53 compute-0 sudo[210834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addxzlzbzduxvfyyaqcfgvclzeoiilcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526172.8950315-1382-130985348624733/AnsiballZ_stat.py'
Jan 27 15:02:53 compute-0 sudo[210834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:53 compute-0 podman[210795]: 2026-01-27 15:02:53.265239086 +0000 UTC m=+0.084179947 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:02:53 compute-0 python3.9[210843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:53 compute-0 sudo[210834]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:53 compute-0 sudo[210923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxgwzvvrqhxfaamavpttxcoodbahzqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526172.8950315-1382-130985348624733/AnsiballZ_file.py'
Jan 27 15:02:53 compute-0 sudo[210923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:53 compute-0 python3.9[210925]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:53 compute-0 sudo[210923]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:54 compute-0 sudo[211075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znwoqjvjazayhauqwiqyrqprtmjigurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526174.056947-1394-165586709258121/AnsiballZ_stat.py'
Jan 27 15:02:54 compute-0 sudo[211075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:54 compute-0 python3.9[211077]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:54 compute-0 sudo[211075]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:54 compute-0 sudo[211153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krsfknawwwurhbdnpjdmdamdqajxladh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526174.056947-1394-165586709258121/AnsiballZ_file.py'
Jan 27 15:02:54 compute-0 sudo[211153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:55 compute-0 python3.9[211155]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:55 compute-0 sudo[211153]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:55 compute-0 sudo[211305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehsrtknjixglfrbllgwqqiwunugtuqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526175.2631092-1406-228654789052906/AnsiballZ_stat.py'
Jan 27 15:02:55 compute-0 sudo[211305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:55 compute-0 python3.9[211307]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:55 compute-0 sudo[211305]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:55 compute-0 sudo[211383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvotcgaacmffhxewjayhovgmgskdfvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526175.2631092-1406-228654789052906/AnsiballZ_file.py'
Jan 27 15:02:55 compute-0 sudo[211383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:56 compute-0 python3.9[211385]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:56 compute-0 sudo[211383]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:56 compute-0 sudo[211535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstpwlkxmfiowzfvuqxyyoogoeqayffk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526176.3085399-1418-215121490559637/AnsiballZ_stat.py'
Jan 27 15:02:56 compute-0 sudo[211535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:56 compute-0 python3.9[211537]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:56 compute-0 sudo[211535]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:57 compute-0 sudo[211613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiflrpldtcjaftqnikjmxnditficcylq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526176.3085399-1418-215121490559637/AnsiballZ_file.py'
Jan 27 15:02:57 compute-0 sudo[211613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:57 compute-0 python3.9[211615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:57 compute-0 sudo[211613]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:57 compute-0 sudo[211765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdytrbyzqkiysqbkujgkjaluengyprzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526177.4848645-1430-136584649536755/AnsiballZ_stat.py'
Jan 27 15:02:57 compute-0 sudo[211765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:58 compute-0 python3.9[211767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:02:58 compute-0 sudo[211765]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:58 compute-0 sudo[211890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgobdktgruijwbpywjifouxjcqkwczel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526177.4848645-1430-136584649536755/AnsiballZ_copy.py'
Jan 27 15:02:58 compute-0 sudo[211890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:58 compute-0 python3.9[211892]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769526177.4848645-1430-136584649536755/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:58 compute-0 sudo[211890]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:59 compute-0 sudo[212042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztalhucilxwpzefiqzbzeeesnzjsvepl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526178.881615-1445-135504632503420/AnsiballZ_file.py'
Jan 27 15:02:59 compute-0 sudo[212042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:59 compute-0 podman[212044]: 2026-01-27 15:02:59.25153381 +0000 UTC m=+0.055947367 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:02:59 compute-0 python3.9[212045]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:02:59 compute-0 sudo[212042]: pam_unix(sudo:session): session closed for user root
Jan 27 15:02:59 compute-0 sudo[212217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvetnqyphzofpmutpmhuetvehntpwdfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526179.5452719-1453-68617271046418/AnsiballZ_command.py'
Jan 27 15:02:59 compute-0 sudo[212217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:02:59 compute-0 python3.9[212219]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:03:00 compute-0 sudo[212217]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:03:00.212 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:03:00.213 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:03:00.213 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:03:00 compute-0 sudo[212372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rviguahedhftjrvfsrzgbralxgrnhgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526180.2079282-1461-152458981765830/AnsiballZ_blockinfile.py'
Jan 27 15:03:00 compute-0 sudo[212372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:00 compute-0 python3.9[212374]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:00 compute-0 sudo[212372]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:01 compute-0 sudo[212524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seixbheebxlqqvizooieqyvxveqdfcns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526181.0703535-1470-46760654246085/AnsiballZ_command.py'
Jan 27 15:03:01 compute-0 sudo[212524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:01 compute-0 python3.9[212526]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:03:01 compute-0 sudo[212524]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:02 compute-0 sudo[212677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orppkwmdwbpdvipygrhapivoxvzyegxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526181.9525738-1478-193653795237701/AnsiballZ_stat.py'
Jan 27 15:03:02 compute-0 sudo[212677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:02 compute-0 python3.9[212679]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:02 compute-0 sudo[212677]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:02 compute-0 sudo[212831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljputxomscmmzsroiotdgsyrqgmkrzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526182.6182442-1486-125593643145636/AnsiballZ_command.py'
Jan 27 15:03:02 compute-0 sudo[212831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:03 compute-0 python3.9[212833]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:03:03 compute-0 sudo[212831]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:03 compute-0 sudo[212986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdeafkshscdhiwxaduyxtcsaaaxdzqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526183.3668246-1494-27404703002421/AnsiballZ_file.py'
Jan 27 15:03:03 compute-0 sudo[212986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:03 compute-0 python3.9[212988]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:03 compute-0 sudo[212986]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:04 compute-0 podman[201073]: time="2026-01-27T15:03:04Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:03:04 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:04 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 27 15:03:04 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:04 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2989 "" "Go-http-client/1.1"
Jan 27 15:03:04 compute-0 sshd-session[185543]: Connection closed by 192.168.122.30 port 48228
Jan 27 15:03:04 compute-0 sshd-session[185540]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:03:04 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 27 15:03:04 compute-0 systemd[1]: session-26.scope: Consumed 1min 49.955s CPU time.
Jan 27 15:03:04 compute-0 systemd-logind[820]: Session 26 logged out. Waiting for processes to exit.
Jan 27 15:03:04 compute-0 systemd-logind[820]: Removed session 26.
Jan 27 15:03:04 compute-0 openstack_network_exporter[204239]: ERROR   15:03:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:03:04 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:03:04 compute-0 openstack_network_exporter[204239]: ERROR   15:03:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:03:04 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:03:09 compute-0 podman[213018]: 2026-01-27 15:03:09.302311175 +0000 UTC m=+0.061081377 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:03:10 compute-0 sshd-session[213038]: Accepted publickey for zuul from 192.168.122.30 port 53586 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:03:10 compute-0 systemd-logind[820]: New session 27 of user zuul.
Jan 27 15:03:10 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 27 15:03:10 compute-0 sshd-session[213038]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.980 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.981 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d85b1d00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:03:10.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:03:11 compute-0 podman[213142]: 2026-01-27 15:03:11.31359537 +0000 UTC m=+0.068783117 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:03:11 compute-0 sudo[213212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdmpzlbadiunqbelputmynakzfjvgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526190.6156533-19-106425389223977/AnsiballZ_systemd_service.py'
Jan 27 15:03:11 compute-0 sudo[213212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:11 compute-0 python3.9[213214]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:03:11 compute-0 systemd[1]: Reloading.
Jan 27 15:03:11 compute-0 systemd-sysv-generator[213256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:03:11 compute-0 systemd-rc-local-generator[213252]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:03:11 compute-0 nova_compute[185191]: 2026-01-27 15:03:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:11 compute-0 nova_compute[185191]: 2026-01-27 15:03:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:12 compute-0 podman[213216]: 2026-01-27 15:03:12.0006079 +0000 UTC m=+0.133333979 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:03:12 compute-0 sudo[213212]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:12 compute-0 podman[213398]: 2026-01-27 15:03:12.876588234 +0000 UTC m=+0.071724907 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, 
vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64)
Jan 27 15:03:12 compute-0 nova_compute[185191]: 2026-01-27 15:03:12.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:13 compute-0 python3.9[213437]: ansible-ansible.builtin.service_facts Invoked
Jan 27 15:03:13 compute-0 network[213463]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 15:03:13 compute-0 network[213464]: 'network-scripts' will be removed from distribution in near future.
Jan 27 15:03:13 compute-0 network[213465]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.967 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.967 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.967 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.968 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.990 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.991 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.991 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:03:13 compute-0 nova_compute[185191]: 2026-01-27 15:03:13.991 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.119 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.120 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5856MB free_disk=72.4793815612793GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.121 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.121 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.192 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.192 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.217 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.233 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.235 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:03:14 compute-0 nova_compute[185191]: 2026-01-27 15:03:14.235 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:03:15 compute-0 nova_compute[185191]: 2026-01-27 15:03:15.211 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:15 compute-0 nova_compute[185191]: 2026-01-27 15:03:15.213 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:03:17 compute-0 sudo[213734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqhelyxlwltevofzpvebuesaybcedbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526197.411854-42-99524548131883/AnsiballZ_systemd_service.py'
Jan 27 15:03:17 compute-0 sudo[213734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:18 compute-0 python3.9[213736]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:03:18 compute-0 sudo[213734]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:18 compute-0 sudo[213887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qukxxgsssozycdrcwtjpmkqprbncvapv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526198.3002908-52-209812094425051/AnsiballZ_file.py'
Jan 27 15:03:18 compute-0 sudo[213887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:18 compute-0 python3.9[213889]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:18 compute-0 sudo[213887]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:19 compute-0 sudo[214039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uumhbsqscevaqrqrjwouwpndjrqhgqix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526199.1252577-60-221036505944765/AnsiballZ_file.py'
Jan 27 15:03:19 compute-0 sudo[214039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:19 compute-0 python3.9[214041]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:19 compute-0 sudo[214039]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:20 compute-0 sudo[214191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbpnhgtecckapelusgilzsgtseptqwjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526199.8421097-69-39386264666288/AnsiballZ_command.py'
Jan 27 15:03:20 compute-0 sudo[214191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:20 compute-0 python3.9[214193]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:03:20 compute-0 sudo[214191]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:21 compute-0 python3.9[214345]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 15:03:22 compute-0 sudo[214495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsejhvkuqailkehbcqmiygzghxkwbtlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526201.7705722-87-87921626881312/AnsiballZ_systemd_service.py'
Jan 27 15:03:22 compute-0 sudo[214495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:22 compute-0 python3.9[214497]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:03:22 compute-0 systemd[1]: Reloading.
Jan 27 15:03:22 compute-0 systemd-rc-local-generator[214520]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:03:22 compute-0 systemd-sysv-generator[214525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:03:22 compute-0 sudo[214495]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:23 compute-0 sudo[214683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxeynundplyehgitxcfpltrgfvblltmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526202.90304-95-92085850591087/AnsiballZ_command.py'
Jan 27 15:03:23 compute-0 sudo[214683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:23 compute-0 python3.9[214685]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:03:23 compute-0 sudo[214683]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:23 compute-0 podman[214687]: 2026-01-27 15:03:23.57189792 +0000 UTC m=+0.108284345 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:03:24 compute-0 sudo[214862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbygnvkthatwqocezgemsnpdeoekdxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526203.722127-104-92011675548348/AnsiballZ_file.py'
Jan 27 15:03:24 compute-0 sudo[214862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:24 compute-0 python3.9[214864]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:24 compute-0 sudo[214862]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:25 compute-0 python3.9[215014]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:26 compute-0 python3.9[215166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:27 compute-0 python3.9[215287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526205.649754-120-53714551124717/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:27 compute-0 python3.9[215437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:28 compute-0 python3.9[215558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526207.304468-135-198575492916569/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:29 compute-0 sudo[215708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxwgunaolrjhsazzohsfgannuaevqjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526208.573365-153-88928368525672/AnsiballZ_getent.py'
Jan 27 15:03:29 compute-0 sudo[215708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:29 compute-0 python3.9[215710]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 15:03:29 compute-0 sudo[215708]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:29 compute-0 podman[201073]: time="2026-01-27T15:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:03:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 27 15:03:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3000 "" "Go-http-client/1.1"
Jan 27 15:03:30 compute-0 podman[215812]: 2026-01-27 15:03:30.311581015 +0000 UTC m=+0.057450758 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:03:30 compute-0 python3.9[215886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:31 compute-0 python3.9[216007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526210.0912561-181-230583949956107/.source.conf _original_basename=ceilometer.conf follow=False checksum=06bb8599d9c8a601385c703338dd9ca518a4891f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:31 compute-0 openstack_network_exporter[204239]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:03:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:03:31 compute-0 openstack_network_exporter[204239]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:03:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:03:31 compute-0 python3.9[216158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:32 compute-0 python3.9[216279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526211.3073585-181-108922300571283/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:32 compute-0 python3.9[216429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:33 compute-0 python3.9[216550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526212.4573414-181-117327863700212/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:34 compute-0 python3.9[216700]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:34 compute-0 python3.9[216852]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:35 compute-0 python3.9[217004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:35 compute-0 python3.9[217125]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526214.9770954-240-22972047440095/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:36 compute-0 sudo[217275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epmgszrnqwmcprkhgxqnqafoygnlcixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526216.1887243-255-267222038391814/AnsiballZ_file.py'
Jan 27 15:03:36 compute-0 sudo[217275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:36 compute-0 python3.9[217277]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:36 compute-0 sudo[217275]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:37 compute-0 sudo[217427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blnqcdgffaxtetwzkjdykznbktyfczce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526216.8431172-263-130542765710964/AnsiballZ_file.py'
Jan 27 15:03:37 compute-0 sudo[217427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:37 compute-0 python3.9[217429]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:37 compute-0 sudo[217427]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:37 compute-0 sudo[217579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iohcighvcudqbvcacofmnbkshwcmfcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526217.4896536-271-56643418053909/AnsiballZ_file.py'
Jan 27 15:03:37 compute-0 sudo[217579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:37 compute-0 python3.9[217581]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:37 compute-0 sudo[217579]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:38 compute-0 sudo[217731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqsotfjrmuzwzldpjlzrrtbqzssvxywq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/AnsiballZ_stat.py'
Jan 27 15:03:38 compute-0 sudo[217731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:38 compute-0 python3.9[217733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:38 compute-0 sudo[217731]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:39 compute-0 sudo[217854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvmniyewyxtbyrxizsrdgbsqsmwhcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/AnsiballZ_copy.py'
Jan 27 15:03:39 compute-0 sudo[217854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:39 compute-0 python3.9[217856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:39 compute-0 sudo[217854]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:39 compute-0 podman[217904]: 2026-01-27 15:03:39.549501952 +0000 UTC m=+0.056359131 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 15:03:39 compute-0 sudo[217945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjdllpkskssyfvmfocoylfnyylabycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/AnsiballZ_stat.py'
Jan 27 15:03:39 compute-0 sudo[217945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:39 compute-0 python3.9[217949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:39 compute-0 sudo[217945]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:40 compute-0 sudo[218070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycayenqqvmyexwjmpcnuzeatvlubgudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/AnsiballZ_copy.py'
Jan 27 15:03:40 compute-0 sudo[218070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:40 compute-0 python3.9[218072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526218.1415386-279-166096538289853/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:40 compute-0 sudo[218070]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:40 compute-0 sudo[218222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maeibuxdmkwwqaztssttjzknrposueqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526220.4376872-279-140641749652604/AnsiballZ_stat.py'
Jan 27 15:03:40 compute-0 sudo[218222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:40 compute-0 python3.9[218224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:40 compute-0 sudo[218222]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:41 compute-0 sudo[218345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noffyruezzfgbckdvvlfogpnknxorcla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526220.4376872-279-140641749652604/AnsiballZ_copy.py'
Jan 27 15:03:41 compute-0 sudo[218345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:41 compute-0 python3.9[218347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769526220.4376872-279-140641749652604/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:41 compute-0 sudo[218345]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:41 compute-0 podman[218348]: 2026-01-27 15:03:41.545969392 +0000 UTC m=+0.063863093 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 15:03:42 compute-0 sudo[218518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekzwdyftlwobclksjupbvhpkvryxykyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526221.827326-321-190223227251417/AnsiballZ_file.py'
Jan 27 15:03:42 compute-0 sudo[218518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:42 compute-0 python3.9[218520]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:42 compute-0 podman[218521]: 2026-01-27 15:03:42.322798512 +0000 UTC m=+0.081609102 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:03:42 compute-0 sudo[218518]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:42 compute-0 sudo[218697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjapurtfmqvuvtbscjqxpqjustydhxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526222.5607266-329-72036914114294/AnsiballZ_file.py'
Jan 27 15:03:42 compute-0 sudo[218697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:43 compute-0 python3.9[218699]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:03:43 compute-0 sudo[218697]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:43 compute-0 podman[218726]: 2026-01-27 15:03:43.306945143 +0000 UTC m=+0.064023358 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Jan 27 15:03:43 compute-0 sudo[218871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvqyvpboonleqsjvrgzvlbmyxequojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526223.2566617-337-17913595043179/AnsiballZ_stat.py'
Jan 27 15:03:43 compute-0 sudo[218871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:43 compute-0 python3.9[218873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:43 compute-0 sudo[218871]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:44 compute-0 sudo[218994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audnnrgmfdigwlxbkydpgmtrrmqwxsbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526223.2566617-337-17913595043179/AnsiballZ_copy.py'
Jan 27 15:03:44 compute-0 sudo[218994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:44 compute-0 python3.9[218996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526223.2566617-337-17913595043179/.source.json _original_basename=.vyjx5pu0 follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:44 compute-0 sudo[218994]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:45 compute-0 python3.9[219146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:47 compute-0 sudo[219567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsjmuwvhhlonsveihgqhxzvlzsuvptzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526227.072-377-41098109976373/AnsiballZ_container_config_data.py'
Jan 27 15:03:47 compute-0 sudo[219567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:47 compute-0 python3.9[219569]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 27 15:03:47 compute-0 sudo[219567]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:48 compute-0 sudo[219719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyivzhulibbdqaqsnkpnuuvyodnktck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526228.1443741-388-196812201691650/AnsiballZ_container_config_hash.py'
Jan 27 15:03:48 compute-0 sudo[219719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:48 compute-0 python3.9[219721]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:03:49 compute-0 sudo[219719]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:50 compute-0 sudo[219871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjfzfdzkbhmzoyhhpwfgaksbyvgdyybo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526229.522764-398-16683618704298/AnsiballZ_edpm_container_manage.py'
Jan 27 15:03:50 compute-0 sudo[219871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:50 compute-0 python3[219873]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:03:50 compute-0 podman[219910]: 2026-01-27 15:03:50.550075955 +0000 UTC m=+0.027898533 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 15:03:51 compute-0 podman[219910]: 2026-01-27 15:03:51.224288428 +0000 UTC m=+0.702111026 container create 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 15:03:51 compute-0 python3[219873]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49 --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 27 15:03:51 compute-0 sudo[219871]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:51 compute-0 sudo[220098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxuwlmagirydltazspumcmikvlfjdci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526231.571754-406-35845965150677/AnsiballZ_stat.py'
Jan 27 15:03:51 compute-0 sudo[220098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:52 compute-0 python3.9[220100]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:52 compute-0 sudo[220098]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:52 compute-0 sudo[220252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rakanolyoknthkixgctiomifjkmeovnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526232.3655705-415-73648083852578/AnsiballZ_file.py'
Jan 27 15:03:52 compute-0 sudo[220252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:52 compute-0 python3.9[220254]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:52 compute-0 sudo[220252]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:53 compute-0 sudo[220328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctaetpugwkjcnkudfvqlrcxffjbslgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526232.3655705-415-73648083852578/AnsiballZ_stat.py'
Jan 27 15:03:53 compute-0 sudo[220328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:53 compute-0 python3.9[220330]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:03:53 compute-0 sudo[220328]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:54 compute-0 sudo[220491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwuanxapvyrqfyjnjqrirqjqetbpvot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526233.5594182-415-268987505482538/AnsiballZ_copy.py'
Jan 27 15:03:54 compute-0 sudo[220491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:54 compute-0 podman[220453]: 2026-01-27 15:03:54.070766321 +0000 UTC m=+0.080602184 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:03:54 compute-0 python3.9[220493]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526233.5594182-415-268987505482538/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:03:54 compute-0 sudo[220491]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:55 compute-0 sudo[220578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyctjsqxbqagsndsbsqvfylpjysrwcgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526233.5594182-415-268987505482538/AnsiballZ_systemd.py'
Jan 27 15:03:55 compute-0 sudo[220578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:55 compute-0 python3.9[220580]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:03:55 compute-0 systemd[1]: Reloading.
Jan 27 15:03:55 compute-0 systemd-rc-local-generator[220611]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:03:55 compute-0 systemd-sysv-generator[220614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:03:55 compute-0 sudo[220578]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:55 compute-0 sudo[220689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osynjxujnzfyxigqjrohhkfvkrepdmti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526233.5594182-415-268987505482538/AnsiballZ_systemd.py'
Jan 27 15:03:55 compute-0 sudo[220689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:56 compute-0 python3.9[220691]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:03:56 compute-0 systemd[1]: Reloading.
Jan 27 15:03:56 compute-0 systemd-rc-local-generator[220721]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:03:56 compute-0 systemd-sysv-generator[220724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:03:56 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 15:03:56 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 15:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 15:03:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.
Jan 27 15:03:57 compute-0 podman[220731]: 2026-01-27 15:03:57.13915985 +0000 UTC m=+0.353431872 container init 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, tcib_managed=true)
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + sudo -E kolla_set_configs
Jan 27 15:03:57 compute-0 sudo[220754]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 15:03:57 compute-0 sudo[220754]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:03:57 compute-0 podman[220731]: 2026-01-27 15:03:57.172212731 +0000 UTC m=+0.386484763 container start 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 15:03:57 compute-0 sudo[220754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Validating config file
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Copying service configuration files
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: INFO:__main__:Writing out command to execute
Jan 27 15:03:57 compute-0 sudo[220754]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: ++ cat /run_command
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + ARGS=
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + sudo kolla_copy_cacerts
Jan 27 15:03:57 compute-0 podman[220731]: ceilometer_agent_ipmi
Jan 27 15:03:57 compute-0 sudo[220768]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 15:03:57 compute-0 sudo[220768]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:03:57 compute-0 sudo[220768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:03:57 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 15:03:57 compute-0 sudo[220768]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + [[ ! -n '' ]]
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + . kolla_extend_start
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + umask 0022
Jan 27 15:03:57 compute-0 ceilometer_agent_ipmi[220747]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 15:03:57 compute-0 sudo[220689]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:57 compute-0 podman[220753]: 2026-01-27 15:03:57.306681548 +0000 UTC m=+0.119604987 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:03:57 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-16520f18fdbcee37.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:03:57 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-16520f18fdbcee37.service: Failed with result 'exit-code'.
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.069 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.069 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.069 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.069 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.070 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.071 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.072 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.073 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.074 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.075 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.076 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.077 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.078 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.079 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.080 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.081 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.082 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.083 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.102 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.103 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.104 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.238 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpyhfnt3zu/privsep.sock']
Jan 27 15:03:58 compute-0 sudo[220883]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpyhfnt3zu/privsep.sock
Jan 27 15:03:58 compute-0 sudo[220883]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:03:58 compute-0 sudo[220883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:03:58 compute-0 python3.9[220936]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:03:58 compute-0 sudo[220883]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.889 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.890 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyhfnt3zu/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.765 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.772 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.776 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.776 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.999 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:58 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:58.999 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.001 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.002 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.002 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.002 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.002 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.002 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.007 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.007 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.008 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.009 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.009 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.009 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.009 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.009 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.010 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.010 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.010 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.011 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.011 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.011 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.012 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.014 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.015 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.015 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.015 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.016 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.016 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.016 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.017 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.018 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.019 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.019 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.019 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.019 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.020 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.021 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.021 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.021 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.021 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.021 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.022 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.022 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.022 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.022 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.022 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.023 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.024 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.025 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.026 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.027 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.028 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.029 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.030 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.031 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.032 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.032 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.032 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.032 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.032 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.033 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.034 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.035 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.035 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.035 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.035 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.035 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.036 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.037 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.037 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.037 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.037 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.037 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.038 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.038 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.038 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.038 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.038 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.039 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.040 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.040 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.040 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.040 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.040 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.041 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.042 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.042 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.042 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 15:03:59 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:03:59.045 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 15:03:59 compute-0 sudo[221092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pealkkaethofppyncjssmkeoyztfisnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526239.0323973-460-89729599572386/AnsiballZ_stat.py'
Jan 27 15:03:59 compute-0 sudo[221092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:03:59 compute-0 python3.9[221094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:03:59 compute-0 sudo[221092]: pam_unix(sudo:session): session closed for user root
Jan 27 15:03:59 compute-0 podman[201073]: time="2026-01-27T15:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:03:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24316 "" "Go-http-client/1.1"
Jan 27 15:03:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3424 "" "Go-http-client/1.1"
Jan 27 15:03:59 compute-0 sudo[221217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvrlulhnyqnuzpxkcdkokamtscazljge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526239.0323973-460-89729599572386/AnsiballZ_copy.py'
Jan 27 15:03:59 compute-0 sudo[221217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:00 compute-0 python3.9[221219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526239.0323973-460-89729599572386/.source.yaml _original_basename=.8i9vyjvf follow=False checksum=e52aaf0c390a912dc5c7293e635d81900448cc3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:00 compute-0 sudo[221217]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:04:00.213 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:04:00.214 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:04:00.214 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:04:00 compute-0 sudo[221382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azqxydcnnimvywwhkqvtzombxnkztchp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526240.4704227-477-253614109114774/AnsiballZ_file.py'
Jan 27 15:04:00 compute-0 sudo[221382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:00 compute-0 podman[221343]: 2026-01-27 15:04:00.775028242 +0000 UTC m=+0.057670956 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:04:00 compute-0 python3.9[221395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:01 compute-0 sudo[221382]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:01 compute-0 openstack_network_exporter[204239]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:04:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:04:01 compute-0 openstack_network_exporter[204239]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:04:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:04:01 compute-0 sudo[221545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhjdynmqrgeqouoluluniwogroquttxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526241.2231061-485-213642825054615/AnsiballZ_file.py'
Jan 27 15:04:01 compute-0 sudo[221545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:01 compute-0 python3.9[221547]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 15:04:01 compute-0 sudo[221545]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:02 compute-0 python3.9[221697]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:04 compute-0 sudo[222118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttvfybsnbcoqtagdhxtglniuzyikghom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526244.0486999-519-158795806014403/AnsiballZ_container_config_data.py'
Jan 27 15:04:04 compute-0 sudo[222118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:04 compute-0 python3.9[222120]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 27 15:04:04 compute-0 sudo[222118]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:05 compute-0 sudo[222270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owvltmcrqfvjnegmkxebzykjxpchfnrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526245.0443554-530-180792433457033/AnsiballZ_container_config_hash.py'
Jan 27 15:04:05 compute-0 sudo[222270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:05 compute-0 python3.9[222272]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 15:04:05 compute-0 sudo[222270]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:06 compute-0 sudo[222422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajsjlrufbtkkmauvwjvslqsasttosxn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526246.230644-540-16886446784975/AnsiballZ_edpm_container_manage.py'
Jan 27 15:04:06 compute-0 sudo[222422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:06 compute-0 python3[222424]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 15:04:07 compute-0 podman[222462]: 2026-01-27 15:04:07.138268416 +0000 UTC m=+0.031365207 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 15:04:07 compute-0 podman[222462]: 2026-01-27 15:04:07.495828858 +0000 UTC m=+0.388925569 container create 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, version=9.4, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vcs-type=git, name=ubi9)
Jan 27 15:04:07 compute-0 python3[222424]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 27 15:04:07 compute-0 sudo[222422]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:08 compute-0 sudo[222650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkepxgamkdxgsebylvjaymkwvmezqrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526247.8005967-548-200397731778964/AnsiballZ_stat.py'
Jan 27 15:04:08 compute-0 sudo[222650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:08 compute-0 python3.9[222652]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:04:08 compute-0 sudo[222650]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:09 compute-0 sudo[222804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzmmxrlftjsrpsnssfywcogbutbmift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526248.7052207-557-169384075354905/AnsiballZ_file.py'
Jan 27 15:04:09 compute-0 sudo[222804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:09 compute-0 python3.9[222806]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:09 compute-0 sudo[222804]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:09 compute-0 sudo[222880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aijnivwnkhihutdummedkkxczcvxomya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526248.7052207-557-169384075354905/AnsiballZ_stat.py'
Jan 27 15:04:09 compute-0 sudo[222880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:09 compute-0 python3.9[222882]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:04:09 compute-0 sudo[222880]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:10 compute-0 podman[223005]: 2026-01-27 15:04:10.2655039 +0000 UTC m=+0.064574372 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 15:04:10 compute-0 sudo[223048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htejfgeagdvnhkvjpwffcgdqlultygtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526249.7974129-557-110272729437454/AnsiballZ_copy.py'
Jan 27 15:04:10 compute-0 sudo[223048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:10 compute-0 python3.9[223052]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769526249.7974129-557-110272729437454/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:10 compute-0 sudo[223048]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:10 compute-0 sudo[223126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcyalkeslkyvctqszktbimmywpawqxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526249.7974129-557-110272729437454/AnsiballZ_systemd.py'
Jan 27 15:04:10 compute-0 sudo[223126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:11 compute-0 python3.9[223128]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 15:04:11 compute-0 systemd[1]: Reloading.
Jan 27 15:04:11 compute-0 systemd-sysv-generator[223157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:04:11 compute-0 systemd-rc-local-generator[223150]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:04:11 compute-0 sudo[223126]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:12 compute-0 sudo[223249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvafkkpomuelnzykipboohxczfblcar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526249.7974129-557-110272729437454/AnsiballZ_systemd.py'
Jan 27 15:04:12 compute-0 sudo[223249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:12 compute-0 podman[223211]: 2026-01-27 15:04:12.106484469 +0000 UTC m=+0.103921724 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260126)
Jan 27 15:04:12 compute-0 python3.9[223255]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 15:04:12 compute-0 systemd[1]: Reloading.
Jan 27 15:04:12 compute-0 podman[223260]: 2026-01-27 15:04:12.552853816 +0000 UTC m=+0.093513952 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:04:12 compute-0 systemd-rc-local-generator[223314]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 15:04:12 compute-0 systemd-sysv-generator[223317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 15:04:12 compute-0 systemd[1]: Starting kepler container...
Jan 27 15:04:12 compute-0 nova_compute[185191]: 2026-01-27 15:04:12.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:13 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:04:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.
Jan 27 15:04:13 compute-0 podman[223325]: 2026-01-27 15:04:13.634641919 +0000 UTC m=+0.754347004 container init 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:04:13 compute-0 kepler[223340]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.667081       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.667256       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.667301       1 config.go:295] kernel version: 5.14
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.668128       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.668156       1 redfish.go:169] failed to get redfish credential file path
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.668550       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.668565       1 power.go:79] using none to obtain power
Jan 27 15:04:13 compute-0 kepler[223340]: E0127 15:04:13.668582       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 15:04:13 compute-0 kepler[223340]: E0127 15:04:13.668603       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 15:04:13 compute-0 podman[223325]: 2026-01-27 15:04:13.668702208 +0000 UTC m=+0.788407293 container start 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=kepler, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, version=9.4)
Jan 27 15:04:13 compute-0 kepler[223340]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 15:04:13 compute-0 kepler[223340]: I0127 15:04:13.670901       1 exporter.go:84] Number of CPUs: 8
Jan 27 15:04:13 compute-0 podman[223325]: kepler
Jan 27 15:04:13 compute-0 systemd[1]: Started kepler container.
Jan 27 15:04:13 compute-0 podman[223343]: 2026-01-27 15:04:13.767096471 +0000 UTC m=+0.225453921 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 15:04:13 compute-0 sudo[223249]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:13 compute-0 podman[223361]: 2026-01-27 15:04:13.823532023 +0000 UTC m=+0.136359698 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, name=ubi9, version=9.4, config_id=kepler, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:04:13 compute-0 systemd[1]: 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-2958ff2142d8c4b2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:04:13 compute-0 systemd[1]: 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-2958ff2142d8c4b2.service: Failed with result 'exit-code'.
Jan 27 15:04:13 compute-0 nova_compute[185191]: 2026-01-27 15:04:13.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:13 compute-0 nova_compute[185191]: 2026-01-27 15:04:13.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:13 compute-0 nova_compute[185191]: 2026-01-27 15:04:13.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:13 compute-0 nova_compute[185191]: 2026-01-27 15:04:13.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.073 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.074 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.074 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.074 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.272 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.274 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5747MB free_disk=72.4774284362793GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.274 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.274 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.298817       1 watcher.go:83] Using in cluster k8s config
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.298863       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 15:04:14 compute-0 kepler[223340]: E0127 15:04:14.298946       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.306531       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.306579       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.310214       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.310250       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.319100       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.319145       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.319160       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326770       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326804       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326808       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326812       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326819       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326831       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326910       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326936       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326955       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.326971       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.327106       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 15:04:14 compute-0 kepler[223340]: I0127 15:04:14.327441       1 exporter.go:208] Started Kepler in 660.610886ms
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.801 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.802 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:04:14 compute-0 python3.9[223554]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.834 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.868 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.870 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:04:14 compute-0 nova_compute[185191]: 2026-01-27 15:04:14.870 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.873 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.873 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.873 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.963 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.963 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:15 compute-0 nova_compute[185191]: 2026-01-27 15:04:15.963 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:04:16 compute-0 nova_compute[185191]: 2026-01-27 15:04:16.029 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:16 compute-0 sudo[223704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxijnfawivcgpseqhzpjvmfqacoenrlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526255.4330485-602-175057300122416/AnsiballZ_stat.py'
Jan 27 15:04:16 compute-0 sudo[223704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:16 compute-0 python3.9[223706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:04:16 compute-0 sudo[223704]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:16 compute-0 nova_compute[185191]: 2026-01-27 15:04:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:04:17 compute-0 sudo[223829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlacjvitmsmhwkmpgewzbpqlnqmlqvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526255.4330485-602-175057300122416/AnsiballZ_copy.py'
Jan 27 15:04:17 compute-0 sudo[223829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:17 compute-0 python3.9[223831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526255.4330485-602-175057300122416/.source.yaml _original_basename=.brsr2r5w follow=False checksum=6c464d08e9f72a04225948e06edefd5a42f69920 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:17 compute-0 sudo[223829]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:17 compute-0 sudo[223981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxsmyrbzbngmrjvdljajwsxdxhcdenor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526257.5360029-617-166167934936811/AnsiballZ_systemd.py'
Jan 27 15:04:18 compute-0 sudo[223981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:18 compute-0 python3.9[223983]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 15:04:18 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 27 15:04:18 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:04:18.790 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 27 15:04:18 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:04:18.896 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 27 15:04:18 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:04:18.897 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 27 15:04:18 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:04:18.897 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 27 15:04:18 compute-0 ceilometer_agent_ipmi[220747]: 2026-01-27 15:04:18.913 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 27 15:04:19 compute-0 systemd[1]: libpod-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope: Deactivated successfully.
Jan 27 15:04:19 compute-0 systemd[1]: libpod-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope: Consumed 2.355s CPU time.
Jan 27 15:04:19 compute-0 podman[223987]: 2026-01-27 15:04:19.332187551 +0000 UTC m=+0.884984028 container died 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:04:19 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-16520f18fdbcee37.timer: Deactivated successfully.
Jan 27 15:04:19 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.
Jan 27 15:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-userdata-shm.mount: Deactivated successfully.
Jan 27 15:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842-merged.mount: Deactivated successfully.
Jan 27 15:04:20 compute-0 podman[223987]: 2026-01-27 15:04:20.548978845 +0000 UTC m=+2.101775252 container cleanup 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:04:20 compute-0 podman[223987]: ceilometer_agent_ipmi
Jan 27 15:04:20 compute-0 podman[224017]: ceilometer_agent_ipmi
Jan 27 15:04:20 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 27 15:04:20 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 27 15:04:20 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 15:04:21 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 15:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 15:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 15:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87fca9d1cc4f9ae62b758bd5735596c9879d16f9bd08cb7058e2f7a317fe7842/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 15:04:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.
Jan 27 15:04:21 compute-0 podman[224028]: 2026-01-27 15:04:21.317538152 +0000 UTC m=+0.630240108 container init 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:04:21 compute-0 podman[224028]: 2026-01-27 15:04:21.348767923 +0000 UTC m=+0.661469839 container start 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + sudo -E kolla_set_configs
Jan 27 15:04:21 compute-0 podman[224028]: ceilometer_agent_ipmi
Jan 27 15:04:21 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 15:04:21 compute-0 sudo[224050]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 15:04:21 compute-0 sudo[224050]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:04:21 compute-0 sudo[224050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:04:21 compute-0 podman[224048]: 2026-01-27 15:04:21.485870121 +0000 UTC m=+0.122141956 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 15:04:21 compute-0 sudo[223981]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:21 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-2a20172b292d919e.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:04:21 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-2a20172b292d919e.service: Failed with result 'exit-code'.
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Validating config file
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Copying service configuration files
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: INFO:__main__:Writing out command to execute
Jan 27 15:04:21 compute-0 sudo[224050]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: ++ cat /run_command
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + ARGS=
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + sudo kolla_copy_cacerts
Jan 27 15:04:21 compute-0 sudo[224096]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 15:04:21 compute-0 sudo[224096]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:04:21 compute-0 sudo[224096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:04:21 compute-0 sudo[224096]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + [[ ! -n '' ]]
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + . kolla_extend_start
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + umask 0022
Jan 27 15:04:21 compute-0 ceilometer_agent_ipmi[224043]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 15:04:22 compute-0 sudo[224224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szfsacrhjzreifpvvvtdxqtpxyuexjcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526261.731028-625-206534024475058/AnsiballZ_systemd.py'
Jan 27 15:04:22 compute-0 sudo[224224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:22 compute-0 python3.9[224226]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 15:04:22 compute-0 systemd[1]: Stopping kepler container...
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.727 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.728 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.729 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.730 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.731 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.732 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.733 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.734 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.738 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.742 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.743 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.773 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.775 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.776 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 15:04:22 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:22.791 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpw032lfl8/privsep.sock']
Jan 27 15:04:22 compute-0 sudo[224245]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpw032lfl8/privsep.sock
Jan 27 15:04:22 compute-0 sudo[224245]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 15:04:22 compute-0 sudo[224245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 15:04:22 compute-0 kepler[223340]: I0127 15:04:22.898454       1 exporter.go:218] Received shutdown signal
Jan 27 15:04:22 compute-0 kepler[223340]: I0127 15:04:22.899638       1 exporter.go:226] Exiting...
Jan 27 15:04:23 compute-0 systemd[1]: libpod-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.scope: Deactivated successfully.
Jan 27 15:04:23 compute-0 podman[224230]: 2026-01-27 15:04:23.107057442 +0000 UTC m=+0.477448607 container died 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, config_id=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:04:23 compute-0 systemd[1]: 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-2958ff2142d8c4b2.timer: Deactivated successfully.
Jan 27 15:04:23 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.
Jan 27 15:04:23 compute-0 sudo[224245]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.548 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.549 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpw032lfl8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.391 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.399 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.403 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.403 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 15:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-userdata-shm.mount: Deactivated successfully.
Jan 27 15:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1740e0bb194c9967bbe45df277ec91f68c58af0a207fe1f5f557a297e087b3f-merged.mount: Deactivated successfully.
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.773 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.774 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.776 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.776 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.776 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.777 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.777 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.777 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.777 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.777 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.778 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.778 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.778 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.784 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.784 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.784 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.785 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.785 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.785 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.785 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.786 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.786 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.786 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.786 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.787 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.787 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.788 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.788 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.788 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.789 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.789 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.789 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.789 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.790 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.790 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.790 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.790 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.791 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.791 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.791 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.791 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.791 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.792 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.792 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.793 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.793 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.793 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.793 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.794 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.794 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.794 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.794 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.794 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.795 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.795 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.795 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.795 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.795 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.796 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.796 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.796 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.796 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.796 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.797 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.797 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.797 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.797 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.797 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.798 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.798 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.798 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.798 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.798 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.799 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.799 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.799 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.799 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.799 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.800 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.800 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.800 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.800 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.800 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.801 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.801 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.801 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.801 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.801 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.802 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.802 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.802 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.802 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.802 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.803 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.804 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.804 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.804 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.804 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.804 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.805 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.805 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.805 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.805 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.805 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.806 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.806 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.806 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.806 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.807 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.807 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.807 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.807 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.807 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.808 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.808 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.808 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.808 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.808 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.809 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.812 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.812 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.812 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.812 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.812 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.814 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.814 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.814 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.814 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.814 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.815 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.816 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.818 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.819 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.820 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.821 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.822 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 15:04:23 compute-0 ceilometer_agent_ipmi[224043]: 2026-01-27 15:04:23.826 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 15:04:23 compute-0 podman[224230]: 2026-01-27 15:04:23.880381067 +0000 UTC m=+1.250772232 container cleanup 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:04:23 compute-0 podman[224230]: kepler
Jan 27 15:04:23 compute-0 podman[224270]: kepler
Jan 27 15:04:23 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 27 15:04:23 compute-0 systemd[1]: Stopped kepler container.
Jan 27 15:04:23 compute-0 systemd[1]: Starting kepler container...
Jan 27 15:04:24 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:04:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.
Jan 27 15:04:24 compute-0 podman[224283]: 2026-01-27 15:04:24.596022616 +0000 UTC m=+0.603843585 container init 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red 
Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:04:24 compute-0 kepler[224299]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 15:04:24 compute-0 podman[224283]: 2026-01-27 15:04:24.624941996 +0000 UTC m=+0.632762935 container start 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9)
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.628331       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.628448       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.628463       1 config.go:295] kernel version: 5.14
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.629127       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.629156       1 redfish.go:169] failed to get redfish credential file path
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.629556       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.629569       1 power.go:79] using none to obtain power
Jan 27 15:04:24 compute-0 kepler[224299]: E0127 15:04:24.629581       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 15:04:24 compute-0 kepler[224299]: E0127 15:04:24.629595       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 15:04:24 compute-0 kepler[224299]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 15:04:24 compute-0 kepler[224299]: I0127 15:04:24.631276       1 exporter.go:84] Number of CPUs: 8
Jan 27 15:04:24 compute-0 podman[224283]: kepler
Jan 27 15:04:24 compute-0 systemd[1]: Started kepler container.
Jan 27 15:04:24 compute-0 podman[224302]: 2026-01-27 15:04:24.808775104 +0000 UTC m=+0.561121624 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:04:24 compute-0 sudo[224224]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:24 compute-0 podman[224326]: 2026-01-27 15:04:24.903414036 +0000 UTC m=+0.266670533 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, architecture=x86_64, release=1214.1726694543, name=ubi9, vcs-type=git, version=9.4, config_id=kepler, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=)
Jan 27 15:04:24 compute-0 systemd[1]: 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-b4b3ae01a3e38e9.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:04:24 compute-0 systemd[1]: 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039-b4b3ae01a3e38e9.service: Failed with result 'exit-code'.
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.305748       1 watcher.go:83] Using in cluster k8s config
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.305813       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 15:04:25 compute-0 kepler[224299]: E0127 15:04:25.305886       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.330894       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.330992       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.336502       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.336564       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.348168       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.348227       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.348250       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364116       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364186       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364196       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364205       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364217       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364245       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364422       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364846       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364930       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.364962       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.365206       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 15:04:25 compute-0 kepler[224299]: I0127 15:04:25.365628       1 exporter.go:208] Started Kepler in 737.462268ms
Jan 27 15:04:25 compute-0 sudo[224515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rusxzdfjuwsedlyloaqzypjxbhdudnqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526265.0666995-633-231715364816544/AnsiballZ_find.py'
Jan 27 15:04:25 compute-0 sudo[224515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:25 compute-0 python3.9[224517]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 15:04:25 compute-0 sudo[224515]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:27 compute-0 sudo[224667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdlunakubccxraeexkecmdnsskgbxopv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526266.4285617-643-100362392954544/AnsiballZ_podman_container_info.py'
Jan 27 15:04:27 compute-0 sudo[224667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:27 compute-0 python3.9[224669]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 15:04:27 compute-0 sudo[224667]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:28 compute-0 sudo[224831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvguejcwbaaaaapexnifvrdtyyzmqzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526267.5914667-651-221647819711904/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:28 compute-0 sudo[224831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:28 compute-0 python3.9[224833]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:29 compute-0 systemd[1]: Started libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope.
Jan 27 15:04:29 compute-0 podman[224834]: 2026-01-27 15:04:29.461799888 +0000 UTC m=+0.845300358 container exec e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:04:29 compute-0 podman[201073]: time="2026-01-27T15:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:04:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 27 15:04:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 27 15:04:29 compute-0 podman[224834]: 2026-01-27 15:04:29.782304091 +0000 UTC m=+1.165804511 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 15:04:30 compute-0 sudo[224831]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:30 compute-0 systemd[1]: libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope: Deactivated successfully.
Jan 27 15:04:31 compute-0 podman[224987]: 2026-01-27 15:04:31.037944033 +0000 UTC m=+0.089947727 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:04:31 compute-0 sudo[225030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzowgfbhijzqlfzvstjceshojrpdsceq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526270.6094134-659-172480523541308/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:31 compute-0 sudo[225030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:31 compute-0 python3.9[225039]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:31 compute-0 openstack_network_exporter[204239]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:04:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:04:31 compute-0 openstack_network_exporter[204239]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:04:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:04:31 compute-0 systemd[1]: Started libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope.
Jan 27 15:04:31 compute-0 podman[225040]: 2026-01-27 15:04:31.505516843 +0000 UTC m=+0.246012236 container exec e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:04:31 compute-0 podman[225057]: 2026-01-27 15:04:31.597160694 +0000 UTC m=+0.075530398 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 27 15:04:31 compute-0 podman[225040]: 2026-01-27 15:04:31.788752611 +0000 UTC m=+0.529247954 container exec_died e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 15:04:31 compute-0 systemd[1]: libpod-conmon-e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014.scope: Deactivated successfully.
Jan 27 15:04:31 compute-0 sudo[225030]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:32 compute-0 sudo[225218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voucskveqodskojfaexenlfmktwswtea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526272.2233992-667-98176200086105/AnsiballZ_file.py'
Jan 27 15:04:32 compute-0 sudo[225218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:32 compute-0 python3.9[225220]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:32 compute-0 sudo[225218]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:33 compute-0 sudo[225370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjmultiusytduhkaaxfdayzjjlovdsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526273.1273122-676-44975901806630/AnsiballZ_podman_container_info.py'
Jan 27 15:04:33 compute-0 sudo[225370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:33 compute-0 python3.9[225372]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 15:04:33 compute-0 sudo[225370]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:34 compute-0 sudo[225532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tclkqqowiwlqvpmcgrcwwwquxzlhobuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526274.0771239-684-106168227629744/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:34 compute-0 sudo[225532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:34 compute-0 python3.9[225534]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:34 compute-0 systemd[1]: Started libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope.
Jan 27 15:04:34 compute-0 podman[225535]: 2026-01-27 15:04:34.832120455 +0000 UTC m=+0.153812159 container exec ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 15:04:34 compute-0 podman[225535]: 2026-01-27 15:04:34.872041251 +0000 UTC m=+0.193732935 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:04:34 compute-0 sudo[225532]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:34 compute-0 systemd[1]: libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope: Deactivated successfully.
Jan 27 15:04:35 compute-0 sudo[225714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomcnkdmqyeslhgnvtevvjrllwufdmuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526275.1434824-692-276818727278212/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:35 compute-0 sudo[225714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:35 compute-0 python3.9[225716]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:35 compute-0 systemd[1]: Started libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope.
Jan 27 15:04:35 compute-0 podman[225717]: 2026-01-27 15:04:35.86833855 +0000 UTC m=+0.166095451 container exec ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:04:36 compute-0 podman[225717]: 2026-01-27 15:04:36.0200175 +0000 UTC m=+0.317774411 container exec_died ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 15:04:36 compute-0 systemd[1]: libpod-conmon-ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d.scope: Deactivated successfully.
Jan 27 15:04:36 compute-0 sudo[225714]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:36 compute-0 sudo[225895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnojtdphrsiaxdglvyjgvazbndqylerg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526276.4671054-700-99480234328191/AnsiballZ_file.py'
Jan 27 15:04:36 compute-0 sudo[225895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:37 compute-0 python3.9[225897]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:37 compute-0 sudo[225895]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:37 compute-0 sudo[226049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziugorjcffpaqlepalflyrirohwcertz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526277.3527918-709-97885405961022/AnsiballZ_podman_container_info.py'
Jan 27 15:04:37 compute-0 sudo[226049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:38 compute-0 python3.9[226051]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 15:04:38 compute-0 sudo[226049]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:38 compute-0 sudo[226212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjvyqbmsvinimphoquwhjzzyfxqcwik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526278.5011249-717-77883250575909/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:38 compute-0 sudo[226212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:39 compute-0 python3.9[226214]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:39 compute-0 systemd[1]: Started libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope.
Jan 27 15:04:39 compute-0 podman[226215]: 2026-01-27 15:04:39.297893574 +0000 UTC m=+0.166002690 container exec 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 15:04:39 compute-0 podman[226233]: 2026-01-27 15:04:39.415308558 +0000 UTC m=+0.100346112 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:04:39 compute-0 podman[226215]: 2026-01-27 15:04:39.493917098 +0000 UTC m=+0.362026234 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 15:04:39 compute-0 systemd[1]: libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope: Deactivated successfully.
Jan 27 15:04:39 compute-0 sudo[226212]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:40 compute-0 sudo[226395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smcjassflsyylbmpkhxutabsdbhvcptk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526279.8222175-725-123798289446907/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:40 compute-0 sudo[226395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:40 compute-0 python3.9[226397]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:40 compute-0 systemd[1]: Started libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope.
Jan 27 15:04:40 compute-0 podman[226398]: 2026-01-27 15:04:40.52253169 +0000 UTC m=+0.147469378 container exec 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:04:40 compute-0 podman[226398]: 2026-01-27 15:04:40.593505114 +0000 UTC m=+0.218442812 container exec_died 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true)
Jan 27 15:04:40 compute-0 podman[226413]: 2026-01-27 15:04:40.738187326 +0000 UTC m=+0.209337595 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 15:04:40 compute-0 sudo[226395]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:40 compute-0 systemd[1]: libpod-conmon-873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088.scope: Deactivated successfully.
Jan 27 15:04:41 compute-0 sudo[226596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjnreiabhovnxgupjdifobpmhmjqakyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526280.9799378-733-13675037285407/AnsiballZ_file.py'
Jan 27 15:04:41 compute-0 sudo[226596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:41 compute-0 python3.9[226598]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:41 compute-0 sudo[226596]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:42 compute-0 sudo[226759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthjtmvnhvlbnickbyltrufiscjwqdnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526281.8937254-742-30688474356088/AnsiballZ_podman_container_info.py'
Jan 27 15:04:42 compute-0 sudo[226759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:42 compute-0 podman[226722]: 2026-01-27 15:04:42.318598225 +0000 UTC m=+0.072570549 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:04:42 compute-0 python3.9[226767]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 15:04:42 compute-0 sudo[226759]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:43 compute-0 sudo[226948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uobsngddwjvlajzhzfzmltdegqhmfvtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526282.8306365-750-16073590072259/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:43 compute-0 sudo[226948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:43 compute-0 podman[226904]: 2026-01-27 15:04:43.246746563 +0000 UTC m=+0.108372178 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 15:04:43 compute-0 python3.9[226953]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:43 compute-0 systemd[1]: Started libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope.
Jan 27 15:04:43 compute-0 podman[226958]: 2026-01-27 15:04:43.576915023 +0000 UTC m=+0.134333842 container exec b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:04:43 compute-0 podman[226958]: 2026-01-27 15:04:43.615367955 +0000 UTC m=+0.172786784 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:04:43 compute-0 sudo[226948]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:43 compute-0 systemd[1]: libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope: Deactivated successfully.
Jan 27 15:04:44 compute-0 sudo[227150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qenggbshupnondfnvgzbaipmsdezoffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526283.9406915-758-229695095096594/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:44 compute-0 sudo[227150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:44 compute-0 podman[227110]: 2026-01-27 15:04:44.366116316 +0000 UTC m=+0.124744173 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7)
Jan 27 15:04:44 compute-0 python3.9[227154]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:44 compute-0 systemd[1]: Started libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope.
Jan 27 15:04:44 compute-0 podman[227158]: 2026-01-27 15:04:44.66068924 +0000 UTC m=+0.125912204 container exec b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:04:44 compute-0 podman[227175]: 2026-01-27 15:04:44.731548522 +0000 UTC m=+0.056980726 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:04:44 compute-0 podman[227158]: 2026-01-27 15:04:44.753615899 +0000 UTC m=+0.218838833 container exec_died b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:04:44 compute-0 systemd[1]: libpod-conmon-b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7.scope: Deactivated successfully.
Jan 27 15:04:44 compute-0 sudo[227150]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:45 compute-0 sudo[227336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riqcijukypgvyqmenvpjyfanaienblif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526285.013099-766-177629693132040/AnsiballZ_file.py'
Jan 27 15:04:45 compute-0 sudo[227336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:45 compute-0 python3.9[227338]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:45 compute-0 sudo[227336]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:46 compute-0 sudo[227488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfnacocbxtrslmzsimdeugsacwhcizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526285.8592982-775-151915802669890/AnsiballZ_podman_container_info.py'
Jan 27 15:04:46 compute-0 sudo[227488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:46 compute-0 python3.9[227490]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 15:04:46 compute-0 sudo[227488]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:47 compute-0 sudo[227652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvcqqqygnsrlpwpoafnhfbvvofwhqakv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526286.7597098-783-103802310908775/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:47 compute-0 sudo[227652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:47 compute-0 python3.9[227654]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:47 compute-0 systemd[1]: Started libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope.
Jan 27 15:04:47 compute-0 podman[227655]: 2026-01-27 15:04:47.739253329 +0000 UTC m=+0.211825933 container exec 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:04:47 compute-0 podman[227674]: 2026-01-27 15:04:47.826086923 +0000 UTC m=+0.068765086 container exec_died 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:04:47 compute-0 podman[227655]: 2026-01-27 15:04:47.899286747 +0000 UTC m=+0.371859351 container exec_died 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:04:47 compute-0 systemd[1]: libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope: Deactivated successfully.
Jan 27 15:04:47 compute-0 sudo[227652]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:48 compute-0 sudo[227835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxtvigprzikokqtnhwhmosmvdorrypms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526288.2002287-791-43772131384162/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:48 compute-0 sudo[227835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:48 compute-0 python3.9[227837]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:49 compute-0 systemd[1]: Started libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope.
Jan 27 15:04:49 compute-0 podman[227838]: 2026-01-27 15:04:49.179124108 +0000 UTC m=+0.310231701 container exec 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:04:49 compute-0 podman[227838]: 2026-01-27 15:04:49.237046738 +0000 UTC m=+0.368154341 container exec_died 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:04:49 compute-0 sudo[227835]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:49 compute-0 systemd[1]: libpod-conmon-34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1.scope: Deactivated successfully.
Jan 27 15:04:50 compute-0 sudo[228018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvkfhmqqvtbzmynsxmhljbbvgqowkyhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526289.6670783-799-71521324112511/AnsiballZ_file.py'
Jan 27 15:04:50 compute-0 sudo[228018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:50 compute-0 python3.9[228020]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:50 compute-0 sudo[228018]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:50 compute-0 sudo[228170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkozdkyifwzwcjzxvgzwkyrwcsghkymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526290.547749-808-186586687707996/AnsiballZ_podman_container_info.py'
Jan 27 15:04:50 compute-0 sudo[228170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:51 compute-0 python3.9[228172]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 15:04:51 compute-0 sudo[228170]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:51 compute-0 sudo[228347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzudqsvdjowayzjqmepveliwpzkkwyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526291.445326-816-272793674528120/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:51 compute-0 sudo[228347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:51 compute-0 podman[228308]: 2026-01-27 15:04:51.871678094 +0000 UTC m=+0.078428647 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 15:04:51 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-2a20172b292d919e.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 15:04:51 compute-0 systemd[1]: 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4-2a20172b292d919e.service: Failed with result 'exit-code'.
Jan 27 15:04:52 compute-0 python3.9[228352]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:52 compute-0 systemd[1]: Started libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope.
Jan 27 15:04:52 compute-0 podman[228355]: 2026-01-27 15:04:52.517261973 +0000 UTC m=+0.320695185 container exec f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, version=9.6, release=1755695350)
Jan 27 15:04:52 compute-0 podman[228374]: 2026-01-27 15:04:52.673241921 +0000 UTC m=+0.115030459 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Jan 27 15:04:52 compute-0 podman[228355]: 2026-01-27 15:04:52.755410228 +0000 UTC m=+0.558843440 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Jan 27 15:04:52 compute-0 systemd[1]: libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope: Deactivated successfully.
Jan 27 15:04:53 compute-0 sudo[228347]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:54 compute-0 sudo[228536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-misnxwgtckamvhwnzusgnstjqneqsqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526293.623052-824-152659578510614/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:54 compute-0 sudo[228536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:54 compute-0 python3.9[228538]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:54 compute-0 systemd[1]: Started libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope.
Jan 27 15:04:54 compute-0 podman[228539]: 2026-01-27 15:04:54.623889536 +0000 UTC m=+0.338266461 container exec f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, io.openshift.expose-services=, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:04:54 compute-0 podman[228559]: 2026-01-27 15:04:54.752354357 +0000 UTC m=+0.111851942 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Jan 27 15:04:54 compute-0 podman[228539]: 2026-01-27 15:04:54.951864086 +0000 UTC m=+0.666241031 container exec_died f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:04:54 compute-0 systemd[1]: libpod-conmon-f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931.scope: Deactivated successfully.
Jan 27 15:04:55 compute-0 sudo[228536]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:55 compute-0 podman[228571]: 2026-01-27 15:04:55.143966903 +0000 UTC m=+0.114427213 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:04:55 compute-0 podman[228570]: 2026-01-27 15:04:55.16085455 +0000 UTC m=+0.144235790 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, distribution-scope=public)
Jan 27 15:04:55 compute-0 sudo[228760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftklebeschanekvvjovdirinthryrxbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526295.3344843-832-195353027487759/AnsiballZ_file.py'
Jan 27 15:04:55 compute-0 sudo[228760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:55 compute-0 python3.9[228762]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:04:56 compute-0 sudo[228760]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:56 compute-0 sudo[228912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywjrhvspdwzzgurnkejpjzmdzkjnjio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526296.3097603-841-259094126835897/AnsiballZ_podman_container_info.py'
Jan 27 15:04:56 compute-0 sudo[228912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:57 compute-0 python3.9[228914]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 27 15:04:57 compute-0 sudo[228912]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:57 compute-0 sudo[229076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckcrgnwbhlplgvuelgudyxenepsoiudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526297.4035437-849-39508064191530/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:57 compute-0 sudo[229076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:04:58 compute-0 python3.9[229078]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:04:58 compute-0 systemd[1]: Started libpod-conmon-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope.
Jan 27 15:04:58 compute-0 podman[229079]: 2026-01-27 15:04:58.877102793 +0000 UTC m=+0.788468964 container exec 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:04:59 compute-0 podman[229079]: 2026-01-27 15:04:59.048885479 +0000 UTC m=+0.960251630 container exec_died 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 15:04:59 compute-0 systemd[1]: libpod-conmon-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope: Deactivated successfully.
Jan 27 15:04:59 compute-0 sudo[229076]: pam_unix(sudo:session): session closed for user root
Jan 27 15:04:59 compute-0 podman[201073]: time="2026-01-27T15:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:04:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 15:04:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3862 "" "Go-http-client/1.1"
Jan 27 15:04:59 compute-0 sudo[229259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqzjifhrfulmydxjgetmlkvgbyolhvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526299.4567635-857-193163818418036/AnsiballZ_podman_container_exec.py'
Jan 27 15:04:59 compute-0 sudo[229259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:00 compute-0 python3.9[229261]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:05:00.214 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:05:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:05:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:05:00 compute-0 systemd[1]: Started libpod-conmon-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope.
Jan 27 15:05:00 compute-0 podman[229262]: 2026-01-27 15:05:00.434895468 +0000 UTC m=+0.382980772 container exec 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:05:00 compute-0 podman[229262]: 2026-01-27 15:05:00.710041447 +0000 UTC m=+0.658126751 container exec_died 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:05:01 compute-0 systemd[1]: libpod-conmon-3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4.scope: Deactivated successfully.
Jan 27 15:05:01 compute-0 sudo[229259]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:01 compute-0 podman[229292]: 2026-01-27 15:05:01.265059581 +0000 UTC m=+0.101821201 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:05:01 compute-0 openstack_network_exporter[204239]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:05:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:05:01 compute-0 openstack_network_exporter[204239]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:05:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:05:01 compute-0 sudo[229463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojauzkqquwmdwqqjaapljoafqnlsfnxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526301.3681774-865-155547412429126/AnsiballZ_file.py'
Jan 27 15:05:01 compute-0 sudo[229463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:01 compute-0 python3.9[229465]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:02 compute-0 sudo[229463]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:02 compute-0 sudo[229615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvixusvgqwemavosaeveonabdkdjvvbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526302.2695289-874-212804199415769/AnsiballZ_podman_container_info.py'
Jan 27 15:05:02 compute-0 sudo[229615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:02 compute-0 python3.9[229617]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 27 15:05:02 compute-0 sudo[229615]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:03 compute-0 sudo[229779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjydjkvccpirgnuelntfyvxylbmpoxkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526303.212336-882-219617914167401/AnsiballZ_podman_container_exec.py'
Jan 27 15:05:03 compute-0 sudo[229779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:03 compute-0 python3.9[229781]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:05:04 compute-0 systemd[1]: Started libpod-conmon-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.scope.
Jan 27 15:05:04 compute-0 podman[229782]: 2026-01-27 15:05:04.079104379 +0000 UTC m=+0.252699661 container exec 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 15:05:04 compute-0 podman[229800]: 2026-01-27 15:05:04.167998569 +0000 UTC m=+0.073386100 container exec_died 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Jan 27 15:05:04 compute-0 podman[229782]: 2026-01-27 15:05:04.231626883 +0000 UTC m=+0.405222165 container exec_died 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, maintainer=Red Hat, Inc., config_id=kepler, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4)
Jan 27 15:05:04 compute-0 systemd[1]: libpod-conmon-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.scope: Deactivated successfully.
Jan 27 15:05:04 compute-0 sudo[229779]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:04 compute-0 sudo[229962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caruzyrzdkvzsqivmljdecftupfjznep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526304.6328552-890-12177299029496/AnsiballZ_podman_container_exec.py'
Jan 27 15:05:04 compute-0 sudo[229962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:05 compute-0 python3.9[229964]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 15:05:05 compute-0 systemd[1]: Started libpod-conmon-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.scope.
Jan 27 15:05:05 compute-0 podman[229965]: 2026-01-27 15:05:05.394803983 +0000 UTC m=+0.182025805 container exec 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Jan 27 15:05:05 compute-0 podman[229984]: 2026-01-27 15:05:05.476063126 +0000 UTC m=+0.068194510 container exec_died 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 15:05:05 compute-0 podman[229965]: 2026-01-27 15:05:05.495524484 +0000 UTC m=+0.282746286 container exec_died 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 15:05:05 compute-0 systemd[1]: libpod-conmon-0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039.scope: Deactivated successfully.
Jan 27 15:05:05 compute-0 sudo[229962]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:06 compute-0 sudo[230147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondzjuvfvzqqtxgjvtugbjuaunwugbhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526305.7392054-898-35267889536914/AnsiballZ_file.py'
Jan 27 15:05:06 compute-0 sudo[230147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:06 compute-0 python3.9[230149]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:06 compute-0 sudo[230147]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:07 compute-0 sudo[230299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajrylopnjtvlqqwqashqmgggrfaonnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526306.900704-907-129198833269713/AnsiballZ_file.py'
Jan 27 15:05:07 compute-0 sudo[230299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:07 compute-0 python3.9[230301]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:07 compute-0 sudo[230299]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:08 compute-0 sudo[230451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxlcatesdotfdygfwkmcutzqhcbuwwvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526307.6888957-915-147543137743380/AnsiballZ_stat.py'
Jan 27 15:05:08 compute-0 sudo[230451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:08 compute-0 python3.9[230453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:08 compute-0 sudo[230451]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:08 compute-0 sudo[230574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lssyjqpodnqcuqsvmknmxxzgsgorrapx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526307.6888957-915-147543137743380/AnsiballZ_copy.py'
Jan 27 15:05:08 compute-0 sudo[230574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:08 compute-0 python3.9[230576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769526307.6888957-915-147543137743380/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:08 compute-0 sudo[230574]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:09 compute-0 sudo[230726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmouddjzogdabkgboiagqhlxbeogokzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526309.321234-931-232120674438194/AnsiballZ_file.py'
Jan 27 15:05:09 compute-0 sudo[230726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:09 compute-0 python3.9[230728]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:09 compute-0 sudo[230726]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:10 compute-0 sudo[230878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbrczvpyixnlspyysibxiikbjlghhosg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526310.1492398-939-263389249829789/AnsiballZ_stat.py'
Jan 27 15:05:10 compute-0 sudo[230878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:10 compute-0 python3.9[230880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:10 compute-0 sudo[230878]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:10 compute-0 podman[230881]: 2026-01-27 15:05:10.861511445 +0000 UTC m=+0.071052367 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:05:10 compute-0 nova_compute[185191]: 2026-01-27 15:05:10.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:10 compute-0 nova_compute[185191]: 2026-01-27 15:05:10.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:05:10 compute-0 nova_compute[185191]: 2026-01-27 15:05:10.975 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:05:10 compute-0 nova_compute[185191]: 2026-01-27 15:05:10.976 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:10 compute-0 nova_compute[185191]: 2026-01-27 15:05:10.977 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.981 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.983 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:05:11.012 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:05:11 compute-0 nova_compute[185191]: 2026-01-27 15:05:11.085 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:11 compute-0 sudo[230978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgkbrngzdiuglstillgknnnmxgmtbek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526310.1492398-939-263389249829789/AnsiballZ_file.py'
Jan 27 15:05:11 compute-0 sudo[230978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:11 compute-0 python3.9[230980]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:11 compute-0 sudo[230978]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:11 compute-0 sudo[231130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bciyslyyepuomspdwmgqzjvnnfmabalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526311.5606253-951-149603820483376/AnsiballZ_stat.py'
Jan 27 15:05:11 compute-0 sudo[231130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:12 compute-0 python3.9[231132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:12 compute-0 sudo[231130]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:12 compute-0 sudo[231222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veuubnguehrgekcdjgblxblqtppdpuhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526311.5606253-951-149603820483376/AnsiballZ_file.py'
Jan 27 15:05:12 compute-0 sudo[231222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:12 compute-0 podman[231182]: 2026-01-27 15:05:12.550037955 +0000 UTC m=+0.078623872 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:05:12 compute-0 python3.9[231228]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kud8ce16 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:12 compute-0 sudo[231222]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:13 compute-0 sudo[231398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umdoyejfcospifnibajperajfgfamvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526312.9829056-963-161328867904660/AnsiballZ_stat.py'
Jan 27 15:05:13 compute-0 sudo[231398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:13 compute-0 podman[231353]: 2026-01-27 15:05:13.438252212 +0000 UTC m=+0.113639652 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:05:13 compute-0 python3.9[231403]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:13 compute-0 sudo[231398]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:13 compute-0 sudo[231482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgjiwqjlzsnzdgxrahemdhjnzdtrdvif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526312.9829056-963-161328867904660/AnsiballZ_file.py'
Jan 27 15:05:13 compute-0 sudo[231482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.105 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.106 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.106 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.106 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.135 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.135 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.135 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.136 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:05:14 compute-0 python3.9[231484]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:14 compute-0 sudo[231482]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.533 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.534 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5656MB free_disk=72.47720336914062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.534 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.535 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:05:14 compute-0 podman[231565]: 2026-01-27 15:05:14.794260037 +0000 UTC m=+0.118487373 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., version=9.6, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf 
as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.848 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.849 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:05:14 compute-0 sudo[231654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eulwzkjucetdwyvpqzxsqsxigmkrnlvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526314.4965096-976-31738963239075/AnsiballZ_command.py'
Jan 27 15:05:14 compute-0 sudo[231654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.943 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.965 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.966 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:05:14 compute-0 nova_compute[185191]: 2026-01-27 15:05:14.990 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:05:15 compute-0 nova_compute[185191]: 2026-01-27 15:05:15.015 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:05:15 compute-0 nova_compute[185191]: 2026-01-27 15:05:15.110 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:05:15 compute-0 python3.9[231656]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:05:15 compute-0 nova_compute[185191]: 2026-01-27 15:05:15.160 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:05:15 compute-0 nova_compute[185191]: 2026-01-27 15:05:15.162 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:05:15 compute-0 nova_compute[185191]: 2026-01-27 15:05:15.163 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:05:15 compute-0 sudo[231654]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:15 compute-0 sudo[231807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skjivvnfvhvllexdxzzouezhiijsdpue ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526315.3796158-984-275649942646295/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 15:05:15 compute-0 sudo[231807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.001 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.001 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.002 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.002 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.017 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.017 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.017 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:16 compute-0 nova_compute[185191]: 2026-01-27 15:05:16.017 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:05:16 compute-0 python3[231809]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 15:05:16 compute-0 sudo[231807]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:16 compute-0 sudo[231959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pesbcaobzbbowtszefxwjjiffihedibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526316.3789926-992-273652767272249/AnsiballZ_stat.py'
Jan 27 15:05:16 compute-0 sudo[231959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:16 compute-0 python3.9[231961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:16 compute-0 sudo[231959]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:17 compute-0 sudo[232037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipzjerciciyotcpatitgybcdnvrxaxbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526316.3789926-992-273652767272249/AnsiballZ_file.py'
Jan 27 15:05:17 compute-0 sudo[232037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:17 compute-0 python3.9[232039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:17 compute-0 sudo[232037]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:18 compute-0 sudo[232189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfiydwcqsptesdbsnalvwxlqcihitjjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526317.7119188-1004-49440676148359/AnsiballZ_stat.py'
Jan 27 15:05:18 compute-0 sudo[232189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:18 compute-0 python3.9[232191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:18 compute-0 sudo[232189]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:18 compute-0 sudo[232267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcueuhtcyuitrjfoameycacgsdvocbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526317.7119188-1004-49440676148359/AnsiballZ_file.py'
Jan 27 15:05:18 compute-0 sudo[232267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:18 compute-0 python3.9[232269]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:18 compute-0 sudo[232267]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:18 compute-0 nova_compute[185191]: 2026-01-27 15:05:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:05:19 compute-0 sudo[232420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otjerjhbczdhqralbinzkrxjnfkrreab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526319.0611298-1016-53115416962139/AnsiballZ_stat.py'
Jan 27 15:05:19 compute-0 sudo[232420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:19 compute-0 python3.9[232422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:19 compute-0 sudo[232420]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:20 compute-0 sudo[232498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yduuyzyuyuxtyrkvrbzmyxpdhdkdsmza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526319.0611298-1016-53115416962139/AnsiballZ_file.py'
Jan 27 15:05:20 compute-0 sudo[232498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:20 compute-0 python3.9[232500]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:20 compute-0 sudo[232498]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:20 compute-0 sudo[232650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tizheoivmvlrnrohtrwnguqjodxwqjsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526320.543698-1028-162828106202458/AnsiballZ_stat.py'
Jan 27 15:05:20 compute-0 sudo[232650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:21 compute-0 python3.9[232652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:21 compute-0 sudo[232650]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:21 compute-0 sudo[232728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvyhlmbijntzdmjkyiuihrjzurgakwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526320.543698-1028-162828106202458/AnsiballZ_file.py'
Jan 27 15:05:21 compute-0 sudo[232728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:21 compute-0 python3.9[232730]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:21 compute-0 sudo[232728]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:22 compute-0 podman[232830]: 2026-01-27 15:05:22.351013133 +0000 UTC m=+0.103044185 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 27 15:05:22 compute-0 sudo[232900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhseubktzzznqynhzhboxrffdyewymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526321.948594-1040-252793727899377/AnsiballZ_stat.py'
Jan 27 15:05:22 compute-0 sudo[232900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:22 compute-0 python3.9[232902]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:22 compute-0 sudo[232900]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:23 compute-0 sudo[233025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxsyrixrhgczpdfhxltgifxeetslxmbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526321.948594-1040-252793727899377/AnsiballZ_copy.py'
Jan 27 15:05:23 compute-0 sudo[233025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:23 compute-0 python3.9[233027]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769526321.948594-1040-252793727899377/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:23 compute-0 sudo[233025]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:24 compute-0 sudo[233177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhoqildxbrctjuyryaeuomzfqfbahwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526323.6533575-1055-198698642942347/AnsiballZ_file.py'
Jan 27 15:05:24 compute-0 sudo[233177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:24 compute-0 python3.9[233179]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:24 compute-0 sudo[233177]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:24 compute-0 sudo[233329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ablxlfditpmejeastzzplwqocaommbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526324.4887109-1063-144632147621191/AnsiballZ_command.py'
Jan 27 15:05:24 compute-0 sudo[233329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:25 compute-0 python3.9[233331]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:05:25 compute-0 sudo[233329]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:25 compute-0 podman[233338]: 2026-01-27 15:05:25.323410233 +0000 UTC m=+0.075974380 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64, config_id=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:05:25 compute-0 podman[233344]: 2026-01-27 15:05:25.3321638 +0000 UTC m=+0.070116811 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:05:25 compute-0 sudo[233526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwqaywdvnufpaifvycbwrmjwoetgcgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526325.3988578-1071-231806094077700/AnsiballZ_blockinfile.py'
Jan 27 15:05:25 compute-0 sudo[233526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:26 compute-0 python3.9[233528]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:26 compute-0 sudo[233526]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:26 compute-0 sudo[233678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oikosgqjlsdgvaqqrjuonldwralshqzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526326.5298963-1080-185076871372999/AnsiballZ_command.py'
Jan 27 15:05:26 compute-0 sudo[233678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:27 compute-0 python3.9[233680]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:05:27 compute-0 sudo[233678]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:28 compute-0 sudo[233831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaunbchjsnhvdwylfifsxohpvngmnqer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526327.6598568-1088-183343426149569/AnsiballZ_stat.py'
Jan 27 15:05:28 compute-0 sudo[233831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:28 compute-0 python3.9[233833]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 15:05:28 compute-0 sudo[233831]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:28 compute-0 sudo[233985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wppjuapnqwwflvsmdgqrpckweogeirvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526328.554491-1096-130992023482139/AnsiballZ_command.py'
Jan 27 15:05:28 compute-0 sudo[233985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:29 compute-0 python3.9[233987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:05:29 compute-0 sudo[233985]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:29 compute-0 podman[201073]: time="2026-01-27T15:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:05:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:05:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3870 "" "Go-http-client/1.1"
Jan 27 15:05:29 compute-0 sudo[234140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqvzzansjeokvjqdstfuanmbkfvfmic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526329.463041-1104-253172402719954/AnsiballZ_file.py'
Jan 27 15:05:29 compute-0 sudo[234140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:30 compute-0 python3.9[234142]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:30 compute-0 sudo[234140]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:30 compute-0 sshd-session[213041]: Connection closed by 192.168.122.30 port 53586
Jan 27 15:05:30 compute-0 sshd-session[213038]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:05:30 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 27 15:05:30 compute-0 systemd[1]: session-27.scope: Consumed 1min 34.076s CPU time.
Jan 27 15:05:30 compute-0 systemd-logind[820]: Session 27 logged out. Waiting for processes to exit.
Jan 27 15:05:30 compute-0 systemd-logind[820]: Removed session 27.
Jan 27 15:05:31 compute-0 openstack_network_exporter[204239]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:05:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:05:31 compute-0 openstack_network_exporter[204239]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:05:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:05:32 compute-0 podman[234167]: 2026-01-27 15:05:32.325015471 +0000 UTC m=+0.080453542 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:05:37 compute-0 sshd-session[234192]: Accepted publickey for zuul from 192.168.122.30 port 53000 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:05:37 compute-0 systemd-logind[820]: New session 28 of user zuul.
Jan 27 15:05:38 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 27 15:05:38 compute-0 sshd-session[234192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:05:39 compute-0 python3.9[234345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 15:05:40 compute-0 sudo[234500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhnhirqgdcpajjjgstxjrnvmpnfsiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526339.8733037-29-34892885301562/AnsiballZ_systemd.py'
Jan 27 15:05:40 compute-0 sudo[234500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:41 compute-0 python3.9[234502]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 27 15:05:41 compute-0 sudo[234500]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:41 compute-0 podman[234504]: 2026-01-27 15:05:41.202068835 +0000 UTC m=+0.090775962 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:05:41 compute-0 sudo[234669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zffesslwupucxmawhkrvubimbzltoakr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526341.387439-37-180316718708126/AnsiballZ_setup.py'
Jan 27 15:05:41 compute-0 sudo[234669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:42 compute-0 python3.9[234671]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 15:05:42 compute-0 sudo[234669]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:43 compute-0 sudo[234767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otowxlzpzlzcpwzaoxiwrkwlqblgdper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526341.387439-37-180316718708126/AnsiballZ_dnf.py'
Jan 27 15:05:43 compute-0 sudo[234767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:43 compute-0 podman[234727]: 2026-01-27 15:05:43.214979121 +0000 UTC m=+0.101158692 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 27 15:05:43 compute-0 python3.9[234772]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 15:05:44 compute-0 podman[234776]: 2026-01-27 15:05:44.399275619 +0000 UTC m=+0.147979020 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 15:05:45 compute-0 podman[234802]: 2026-01-27 15:05:45.370447626 +0000 UTC m=+0.127825749 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41)
Jan 27 15:05:47 compute-0 sudo[234767]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:48 compute-0 sudo[234975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqmymzqjxywegbjavvpbzakdmhhxrwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526347.6207108-49-38413332880410/AnsiballZ_stat.py'
Jan 27 15:05:48 compute-0 sudo[234975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:48 compute-0 python3.9[234977]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:48 compute-0 sudo[234975]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:48 compute-0 sudo[235098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovdvduhvpkyiwhxqfdmeqflmcpqipceb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526347.6207108-49-38413332880410/AnsiballZ_copy.py'
Jan 27 15:05:48 compute-0 sudo[235098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:49 compute-0 python3.9[235100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526347.6207108-49-38413332880410/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:49 compute-0 sudo[235098]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:50 compute-0 sudo[235251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbysjyuuivjtzbplgokuwrsnxpqbtdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526349.4963071-64-144983705063102/AnsiballZ_file.py'
Jan 27 15:05:50 compute-0 sudo[235251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:50 compute-0 python3.9[235253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:50 compute-0 sudo[235251]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:51 compute-0 sudo[235403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rafvhqpfbxflvxgoballopvudzktjzcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526350.6266038-72-250987213917676/AnsiballZ_stat.py'
Jan 27 15:05:51 compute-0 sudo[235403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:51 compute-0 python3.9[235405]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 15:05:51 compute-0 sudo[235403]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:52 compute-0 sudo[235526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txlzgcrfzhqumlomoqbxymfjiqlaisgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526350.6266038-72-250987213917676/AnsiballZ_copy.py'
Jan 27 15:05:52 compute-0 sudo[235526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:52 compute-0 python3.9[235528]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769526350.6266038-72-250987213917676/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 15:05:52 compute-0 sudo[235526]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:53 compute-0 sudo[235693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksxvheajnwhmkhiudxvibntijbcoffs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769526352.6028342-87-173276695721768/AnsiballZ_systemd.py'
Jan 27 15:05:53 compute-0 sudo[235693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:05:53 compute-0 podman[235652]: 2026-01-27 15:05:53.201042044 +0000 UTC m=+0.150201100 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:05:53 compute-0 python3.9[235698]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 15:05:53 compute-0 systemd[1]: Stopping System Logging Service...
Jan 27 15:05:53 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 27 15:05:53 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 27 15:05:53 compute-0 systemd[1]: Stopped System Logging Service.
Jan 27 15:05:53 compute-0 systemd[1]: rsyslog.service: Consumed 4.073s CPU time, 9.6M memory peak, read 0B from disk, written 6.1M to disk.
Jan 27 15:05:53 compute-0 systemd[1]: Starting System Logging Service...
Jan 27 15:05:54 compute-0 rsyslogd[235702]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235702" x-info="https://www.rsyslog.com"] start
Jan 27 15:05:54 compute-0 systemd[1]: Started System Logging Service.
Jan 27 15:05:54 compute-0 rsyslogd[235702]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 15:05:54 compute-0 rsyslogd[235702]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 27 15:05:54 compute-0 rsyslogd[235702]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 27 15:05:54 compute-0 rsyslogd[235702]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 27 15:05:54 compute-0 sudo[235693]: pam_unix(sudo:session): session closed for user root
Jan 27 15:05:54 compute-0 rsyslogd[235702]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 27 15:05:54 compute-0 sshd-session[234195]: Connection closed by 192.168.122.30 port 53000
Jan 27 15:05:54 compute-0 sshd-session[234192]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:05:54 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 27 15:05:54 compute-0 systemd[1]: session-28.scope: Consumed 11.342s CPU time.
Jan 27 15:05:54 compute-0 systemd-logind[820]: Session 28 logged out. Waiting for processes to exit.
Jan 27 15:05:54 compute-0 systemd-logind[820]: Removed session 28.
Jan 27 15:05:56 compute-0 podman[235731]: 2026-01-27 15:05:56.321603453 +0000 UTC m=+0.079660284 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 15:05:56 compute-0 podman[235732]: 2026-01-27 15:05:56.343856361 +0000 UTC m=+0.084540444 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:05:59 compute-0 podman[201073]: time="2026-01-27T15:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:05:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:05:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3873 "" "Go-http-client/1.1"
Jan 27 15:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:06:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:06:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:06:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:06:01 compute-0 openstack_network_exporter[204239]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:06:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:06:01 compute-0 openstack_network_exporter[204239]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:06:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:06:03 compute-0 podman[235774]: 2026-01-27 15:06:03.327911494 +0000 UTC m=+0.073707233 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:06:12 compute-0 podman[235798]: 2026-01-27 15:06:12.342960432 +0000 UTC m=+0.094970904 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.987 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:06:13 compute-0 nova_compute[185191]: 2026-01-27 15:06:13.987 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:06:14 compute-0 podman[235817]: 2026-01-27 15:06:14.320272396 +0000 UTC m=+0.075818529 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.333 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.334 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5736MB free_disk=72.47618103027344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.335 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.335 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.424 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.424 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.448 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.466 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.469 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:06:14 compute-0 nova_compute[185191]: 2026-01-27 15:06:14.470 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:06:14 compute-0 podman[235837]: 2026-01-27 15:06:14.802190366 +0000 UTC m=+0.126959215 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:06:15 compute-0 nova_compute[185191]: 2026-01-27 15:06:15.470 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:15 compute-0 nova_compute[185191]: 2026-01-27 15:06:15.470 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:06:15 compute-0 nova_compute[185191]: 2026-01-27 15:06:15.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:15 compute-0 nova_compute[185191]: 2026-01-27 15:06:15.942 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:15 compute-0 nova_compute[185191]: 2026-01-27 15:06:15.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:16 compute-0 podman[235863]: 2026-01-27 15:06:16.365565098 +0000 UTC m=+0.110943235 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:06:16 compute-0 nova_compute[185191]: 2026-01-27 15:06:16.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:16 compute-0 nova_compute[185191]: 2026-01-27 15:06:16.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:06:16 compute-0 nova_compute[185191]: 2026-01-27 15:06:16.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:06:16 compute-0 nova_compute[185191]: 2026-01-27 15:06:16.959 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:06:16 compute-0 nova_compute[185191]: 2026-01-27 15:06:16.960 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:17 compute-0 nova_compute[185191]: 2026-01-27 15:06:17.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:20 compute-0 nova_compute[185191]: 2026-01-27 15:06:20.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:06:21 compute-0 sshd-session[235882]: Connection closed by 2.57.122.238 port 60432
Jan 27 15:06:23 compute-0 podman[235883]: 2026-01-27 15:06:23.335888262 +0000 UTC m=+0.085823379 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:06:27 compute-0 podman[235902]: 2026-01-27 15:06:27.321115382 +0000 UTC m=+0.076145078 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 15:06:27 compute-0 podman[235903]: 2026-01-27 15:06:27.332799867 +0000 UTC m=+0.086227890 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:06:29 compute-0 podman[201073]: time="2026-01-27T15:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:06:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:06:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3872 "" "Go-http-client/1.1"
Jan 27 15:06:31 compute-0 openstack_network_exporter[204239]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:06:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:06:31 compute-0 openstack_network_exporter[204239]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:06:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:06:34 compute-0 podman[235945]: 2026-01-27 15:06:34.291950901 +0000 UTC m=+0.053764967 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:06:39 compute-0 sshd-session[235969]: Accepted publickey for zuul from 38.129.56.249 port 48158 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 15:06:39 compute-0 systemd-logind[820]: New session 29 of user zuul.
Jan 27 15:06:39 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 27 15:06:39 compute-0 sshd-session[235969]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:06:40 compute-0 python3[236146]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 15:06:42 compute-0 sudo[236367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxxurxtudxiiimigvnadntwjqwvujbf ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526401.5961745-36887-168152184521026/AnsiballZ_command.py'
Jan 27 15:06:42 compute-0 sudo[236367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:06:42 compute-0 python3[236369]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:06:42 compute-0 sudo[236367]: pam_unix(sudo:session): session closed for user root
Jan 27 15:06:43 compute-0 sudo[236533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpvzrwiezofwkdvcfjzocjwyvevfftpv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526402.6894467-36898-138542045090880/AnsiballZ_command.py'
Jan 27 15:06:43 compute-0 podman[236495]: 2026-01-27 15:06:43.068995522 +0000 UTC m=+0.067193538 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:06:43 compute-0 sudo[236533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:06:43 compute-0 python3[236539]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:06:44 compute-0 sudo[236533]: pam_unix(sudo:session): session closed for user root
Jan 27 15:06:44 compute-0 podman[236542]: 2026-01-27 15:06:44.768474094 +0000 UTC m=+0.092591651 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:06:45 compute-0 podman[236604]: 2026-01-27 15:06:45.375875898 +0000 UTC m=+0.130266854 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 27 15:06:45 compute-0 python3[236735]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 15:06:46 compute-0 podman[236861]: 2026-01-27 15:06:46.790413589 +0000 UTC m=+0.082831776 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:06:46 compute-0 sudo[236901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kurwynmpsxogmkyorgvzxqrpjsefcrwy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526406.3917048-36944-79673640705028/AnsiballZ_setup.py'
Jan 27 15:06:46 compute-0 sudo[236901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:06:47 compute-0 python3[236907]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 15:06:48 compute-0 sudo[236901]: pam_unix(sudo:session): session closed for user root
Jan 27 15:06:49 compute-0 sudo[237130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgfjnkmobhljozghfmnwdbbhacofrhfd ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526408.7184577-36975-25057751972881/AnsiballZ_command.py'
Jan 27 15:06:49 compute-0 sudo[237130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:06:49 compute-0 python3[237132]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:06:49 compute-0 sudo[237130]: pam_unix(sudo:session): session closed for user root
Jan 27 15:06:52 compute-0 sudo[237295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zybhcimnocconppnmctghtvfntqevzvu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769526411.8788908-36992-203975557219447/AnsiballZ_command.py'
Jan 27 15:06:52 compute-0 sudo[237295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:06:52 compute-0 python3[237297]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:06:52 compute-0 sudo[237295]: pam_unix(sudo:session): session closed for user root
Jan 27 15:06:54 compute-0 podman[237337]: 2026-01-27 15:06:54.385513219 +0000 UTC m=+0.137484334 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:06:58 compute-0 podman[237357]: 2026-01-27 15:06:58.320860023 +0000 UTC m=+0.076252826 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:06:58 compute-0 podman[237356]: 2026-01-27 15:06:58.34093566 +0000 UTC m=+0.100913901 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Jan 27 15:06:59 compute-0 podman[201073]: time="2026-01-27T15:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:06:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:06:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3877 "" "Go-http-client/1.1"
Jan 27 15:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:07:00.216 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:07:00.217 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:07:00.217 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:07:01 compute-0 openstack_network_exporter[204239]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:07:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:07:01 compute-0 openstack_network_exporter[204239]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:07:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:07:05 compute-0 podman[237398]: 2026-01-27 15:07:05.315055606 +0000 UTC m=+0.067586963 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.981 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.983 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:07:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:07:13 compute-0 podman[237423]: 2026-01-27 15:07:13.299154307 +0000 UTC m=+0.057701488 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:07:13 compute-0 nova_compute[185191]: 2026-01-27 15:07:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:13 compute-0 nova_compute[185191]: 2026-01-27 15:07:13.996 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:07:13 compute-0 nova_compute[185191]: 2026-01-27 15:07:13.997 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:07:13 compute-0 nova_compute[185191]: 2026-01-27 15:07:13.997 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:07:13 compute-0 nova_compute[185191]: 2026-01-27 15:07:13.997 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.300 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.301 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5736MB free_disk=72.47825622558594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.301 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.301 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.378 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.379 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.400 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.418 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.419 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:07:14 compute-0 nova_compute[185191]: 2026-01-27 15:07:14.420 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:07:15 compute-0 podman[237442]: 2026-01-27 15:07:15.316318853 +0000 UTC m=+0.076900512 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:07:16 compute-0 podman[237462]: 2026-01-27 15:07:16.346632592 +0000 UTC m=+0.102727938 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.415 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.415 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.416 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.416 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.416 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.975 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:07:16 compute-0 nova_compute[185191]: 2026-01-27 15:07:16.975 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:17 compute-0 podman[237488]: 2026-01-27 15:07:17.328290477 +0000 UTC m=+0.083176724 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, release=1755695350, container_name=openstack_network_exporter)
Jan 27 15:07:17 compute-0 nova_compute[185191]: 2026-01-27 15:07:17.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:21 compute-0 nova_compute[185191]: 2026-01-27 15:07:21.949 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:07:25 compute-0 podman[237510]: 2026-01-27 15:07:25.303640792 +0000 UTC m=+0.064850842 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:07:29 compute-0 podman[237531]: 2026-01-27 15:07:29.301474207 +0000 UTC m=+0.058505539 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:07:29 compute-0 podman[237530]: 2026-01-27 15:07:29.308861507 +0000 UTC m=+0.070433306 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 27 15:07:29 compute-0 podman[201073]: time="2026-01-27T15:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:07:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:07:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3877 "" "Go-http-client/1.1"
Jan 27 15:07:31 compute-0 openstack_network_exporter[204239]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:07:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:07:31 compute-0 openstack_network_exporter[204239]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:07:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:07:36 compute-0 podman[237572]: 2026-01-27 15:07:36.31313067 +0000 UTC m=+0.067012728 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:07:44 compute-0 podman[237594]: 2026-01-27 15:07:44.360836659 +0000 UTC m=+0.117666183 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 27 15:07:46 compute-0 podman[237613]: 2026-01-27 15:07:46.324347564 +0000 UTC m=+0.079212342 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:07:47 compute-0 podman[237632]: 2026-01-27 15:07:47.374462863 +0000 UTC m=+0.121655115 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 27 15:07:47 compute-0 podman[237658]: 2026-01-27 15:07:47.461058584 +0000 UTC m=+0.065448488 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350, version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 15:07:52 compute-0 sshd-session[235972]: Received disconnect from 38.129.56.249 port 48158:11: disconnected by user
Jan 27 15:07:52 compute-0 sshd-session[235972]: Disconnected from user zuul 38.129.56.249 port 48158
Jan 27 15:07:52 compute-0 sshd-session[235969]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:07:52 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 27 15:07:52 compute-0 systemd[1]: session-29.scope: Consumed 8.655s CPU time.
Jan 27 15:07:52 compute-0 systemd-logind[820]: Session 29 logged out. Waiting for processes to exit.
Jan 27 15:07:52 compute-0 systemd-logind[820]: Removed session 29.
Jan 27 15:07:56 compute-0 podman[237679]: 2026-01-27 15:07:56.349871209 +0000 UTC m=+0.093629311 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:07:59 compute-0 podman[201073]: time="2026-01-27T15:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:07:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:07:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3869 "" "Go-http-client/1.1"
Jan 27 15:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:08:00 compute-0 podman[237698]: 2026-01-27 15:08:00.321308949 +0000 UTC m=+0.075135199 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Jan 27 15:08:00 compute-0 podman[237699]: 2026-01-27 15:08:00.352066774 +0000 UTC m=+0.100775474 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:08:01 compute-0 openstack_network_exporter[204239]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:08:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:08:01 compute-0 openstack_network_exporter[204239]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:08:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:08:07 compute-0 podman[237739]: 2026-01-27 15:08:07.295612975 +0000 UTC m=+0.053113996 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:08:14 compute-0 podman[237761]: 2026-01-27 15:08:14.784426546 +0000 UTC m=+0.103285438 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:08:15 compute-0 nova_compute[185191]: 2026-01-27 15:08:15.977 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.280 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.282 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5727MB free_disk=72.47811508178711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.282 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.282 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.349 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.349 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.372 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.394 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.396 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:08:16 compute-0 nova_compute[185191]: 2026-01-27 15:08:16.396 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:08:17 compute-0 podman[237780]: 2026-01-27 15:08:17.356295832 +0000 UTC m=+0.106207061 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.396 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.397 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.942 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:08:17 compute-0 nova_compute[185191]: 2026-01-27 15:08:17.968 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:08:18 compute-0 podman[237800]: 2026-01-27 15:08:18.321290374 +0000 UTC m=+0.069128726 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Jan 27 15:08:18 compute-0 podman[237799]: 2026-01-27 15:08:18.361080189 +0000 UTC m=+0.113876167 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:08:18 compute-0 nova_compute[185191]: 2026-01-27 15:08:18.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:19 compute-0 nova_compute[185191]: 2026-01-27 15:08:19.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:23 compute-0 nova_compute[185191]: 2026-01-27 15:08:23.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:08:27 compute-0 podman[237845]: 2026-01-27 15:08:27.308144521 +0000 UTC m=+0.063708277 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:08:29 compute-0 podman[201073]: time="2026-01-27T15:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:08:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:08:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3880 "" "Go-http-client/1.1"
Jan 27 15:08:31 compute-0 podman[237866]: 2026-01-27 15:08:31.308318306 +0000 UTC m=+0.064458837 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:08:31 compute-0 podman[237865]: 2026-01-27 15:08:31.309122576 +0000 UTC m=+0.069676649 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Jan 27 15:08:31 compute-0 openstack_network_exporter[204239]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:08:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:08:31 compute-0 openstack_network_exporter[204239]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:08:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:08:38 compute-0 podman[237904]: 2026-01-27 15:08:38.304292505 +0000 UTC m=+0.064231200 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:08:38 compute-0 sshd-session[237927]: Invalid user sol from 2.57.122.238 port 42994
Jan 27 15:08:39 compute-0 sshd-session[237927]: Connection closed by invalid user sol 2.57.122.238 port 42994 [preauth]
Jan 27 15:08:45 compute-0 podman[237929]: 2026-01-27 15:08:45.304608436 +0000 UTC m=+0.057763275 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:08:48 compute-0 podman[237946]: 2026-01-27 15:08:48.318026393 +0000 UTC m=+0.075285943 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:08:49 compute-0 podman[237966]: 2026-01-27 15:08:49.332065626 +0000 UTC m=+0.078292610 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Jan 27 15:08:49 compute-0 podman[237965]: 2026-01-27 15:08:49.365227092 +0000 UTC m=+0.117299525 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:08:58 compute-0 podman[238012]: 2026-01-27 15:08:58.312557777 +0000 UTC m=+0.073844676 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:08:58 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:58.482 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:08:58 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:58.483 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:08:58 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:08:58.485 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:08:59 compute-0 podman[201073]: time="2026-01-27T15:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:08:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:08:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3879 "" "Go-http-client/1.1"
Jan 27 15:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:09:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:09:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:09:00.219 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:09:01 compute-0 openstack_network_exporter[204239]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:09:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:09:01 compute-0 openstack_network_exporter[204239]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:09:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:09:02 compute-0 podman[238034]: 2026-01-27 15:09:02.299524563 +0000 UTC m=+0.058064815 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:09:02 compute-0 podman[238033]: 2026-01-27 15:09:02.300035026 +0000 UTC m=+0.060703025 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:09:09 compute-0 podman[238073]: 2026-01-27 15:09:09.32781811 +0000 UTC m=+0.074608525 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.982 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.983 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:09:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:09:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:09:15 compute-0 nova_compute[185191]: 2026-01-27 15:09:15.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:15 compute-0 nova_compute[185191]: 2026-01-27 15:09:15.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:15 compute-0 nova_compute[185191]: 2026-01-27 15:09:15.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:09:15 compute-0 nova_compute[185191]: 2026-01-27 15:09:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.071 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.071 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.071 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.071 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:09:16 compute-0 podman[238097]: 2026-01-27 15:09:16.326906672 +0000 UTC m=+0.081621703 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.367 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.368 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5714MB free_disk=72.47809219360352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.368 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.369 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.449 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.450 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.481 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.508 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.510 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:09:16 compute-0 nova_compute[185191]: 2026-01-27 15:09:16.510 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:09:17 compute-0 nova_compute[185191]: 2026-01-27 15:09:17.515 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:17 compute-0 nova_compute[185191]: 2026-01-27 15:09:17.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:17 compute-0 nova_compute[185191]: 2026-01-27 15:09:17.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:09:17 compute-0 nova_compute[185191]: 2026-01-27 15:09:17.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:09:17 compute-0 nova_compute[185191]: 2026-01-27 15:09:17.991 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:09:18 compute-0 nova_compute[185191]: 2026-01-27 15:09:18.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:18 compute-0 nova_compute[185191]: 2026-01-27 15:09:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:19 compute-0 podman[238114]: 2026-01-27 15:09:19.316044355 +0000 UTC m=+0.073599999 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:09:19 compute-0 nova_compute[185191]: 2026-01-27 15:09:19.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:20 compute-0 podman[238135]: 2026-01-27 15:09:20.324717857 +0000 UTC m=+0.079985639 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, name=ubi9-minimal)
Jan 27 15:09:20 compute-0 podman[238134]: 2026-01-27 15:09:20.342057968 +0000 UTC m=+0.104571823 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:09:24 compute-0 nova_compute[185191]: 2026-01-27 15:09:24.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:09:29 compute-0 podman[238182]: 2026-01-27 15:09:29.348061364 +0000 UTC m=+0.092901222 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:09:29 compute-0 podman[201073]: time="2026-01-27T15:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:09:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:09:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3879 "" "Go-http-client/1.1"
Jan 27 15:09:31 compute-0 openstack_network_exporter[204239]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:09:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:09:31 compute-0 openstack_network_exporter[204239]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:09:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:09:33 compute-0 podman[238201]: 2026-01-27 15:09:33.317763893 +0000 UTC m=+0.066188352 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:09:33 compute-0 podman[238200]: 2026-01-27 15:09:33.320553757 +0000 UTC m=+0.078123569 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 27 15:09:40 compute-0 podman[238240]: 2026-01-27 15:09:40.304404343 +0000 UTC m=+0.065394321 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:09:47 compute-0 podman[238263]: 2026-01-27 15:09:47.307347508 +0000 UTC m=+0.058519478 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:09:50 compute-0 podman[238283]: 2026-01-27 15:09:50.308317426 +0000 UTC m=+0.066295504 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:09:51 compute-0 podman[238302]: 2026-01-27 15:09:51.339176267 +0000 UTC m=+0.073660090 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:09:51 compute-0 podman[238301]: 2026-01-27 15:09:51.390345119 +0000 UTC m=+0.125604453 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:09:59 compute-0 podman[201073]: time="2026-01-27T15:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:09:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:09:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3878 "" "Go-http-client/1.1"
Jan 27 15:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:00.218 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:00.219 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:00.219 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:00 compute-0 podman[238345]: 2026-01-27 15:10:00.348501671 +0000 UTC m=+0.097245378 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 15:10:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:01.321 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:10:01 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:01.322 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:10:01 compute-0 openstack_network_exporter[204239]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:10:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:10:01 compute-0 openstack_network_exporter[204239]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:10:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:10:04 compute-0 podman[238366]: 2026-01-27 15:10:04.320007446 +0000 UTC m=+0.073142116 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=, distribution-scope=public, vendor=Red Hat, Inc., config_id=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base 
rhel9, release=1214.1726694543, io.buildah.version=1.29.0, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 27 15:10:04 compute-0 podman[238367]: 2026-01-27 15:10:04.331260176 +0000 UTC m=+0.084226091 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:10:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:07.325 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:11 compute-0 podman[238406]: 2026-01-27 15:10:11.298068978 +0000 UTC m=+0.058573789 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:10:12 compute-0 nova_compute[185191]: 2026-01-27 15:10:12.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:12 compute-0 nova_compute[185191]: 2026-01-27 15:10:12.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:10:13 compute-0 nova_compute[185191]: 2026-01-27 15:10:13.000 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.048 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.048 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.082 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.299 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.299 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.310 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.310 185195 INFO nova.compute.claims [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.501 185195 DEBUG nova.scheduler.client.report [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.556 185195 DEBUG nova.scheduler.client.report [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.557 185195 DEBUG nova.compute.provider_tree [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.577 185195 DEBUG nova.scheduler.client.report [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.606 185195 DEBUG nova.scheduler.client.report [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.656 185195 DEBUG nova.compute.provider_tree [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.686 185195 DEBUG nova.scheduler.client.report [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.725 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.726 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.802 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.803 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.844 185195 INFO nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:10:15 compute-0 nova_compute[185191]: 2026-01-27 15:10:15.996 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.215 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.216 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.217 185195 INFO nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Creating image(s)
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.218 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.218 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.219 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.220 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.220 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.408 185195 WARNING oslo_policy.policy [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 15:10:16 compute-0 nova_compute[185191]: 2026-01-27 15:10:16.408 185195 WARNING oslo_policy.policy [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.486 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Successfully created port: 4c1725b6-637d-4572-927d-1137b3ba538c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.792 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.852 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.853 185195 DEBUG nova.virt.images [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] 2b336e4b-c98e-4b97-9f8f-b3290e6b6caf was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.896 185195 DEBUG nova.privsep.utils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 15:10:17 compute-0 nova_compute[185191]: 2026-01-27 15:10:17.897 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.part /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.090 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.091 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.091 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.092 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.092 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.133 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.133 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.134 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.134 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:10:18 compute-0 podman[238442]: 2026-01-27 15:10:18.350487348 +0000 UTC m=+0.100498104 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.519 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.521 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.45779418945312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.521 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.521 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.994 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.part /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.converted" returned: 0 in 1.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:18 compute-0 nova_compute[185191]: 2026-01-27 15:10:18.998 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.037 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.037 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.038 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.065 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9.converted --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.066 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.079 185195 INFO oslo.privsep.daemon [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpgdo7gccb/privsep.sock']
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.100 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.182 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updated inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.183 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.183 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.231 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.232 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.664 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Successfully updated port: 4c1725b6-637d-4572-927d-1137b3ba538c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.699 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.700 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.700 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.828 185195 INFO oslo.privsep.daemon [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.682 238468 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.688 238468 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.690 238468 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.691 238468 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238468
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.925 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.933 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.954 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.982 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.983 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:10:19 compute-0 nova_compute[185191]: 2026-01-27 15:10:19.983 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.009 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.010 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.011 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.012 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.013 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.014 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.030 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.051 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.052 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.083 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.116 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.117 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.270 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk 1073741824" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.271 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.272 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.298 185195 DEBUG nova.compute.manager [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-changed-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.299 185195 DEBUG nova.compute.manager [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Refreshing instance network info cache due to event network-changed-4c1725b6-637d-4572-927d-1137b3ba538c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.300 185195 DEBUG oslo_concurrency.lockutils [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.340 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.340 185195 DEBUG nova.virt.disk.api [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking if we can resize image /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.341 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.401 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.402 185195 DEBUG nova.virt.disk.api [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Cannot resize image /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.402 185195 DEBUG nova.objects.instance [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'migration_context' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.428 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.429 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.430 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.430 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.431 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.431 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.462 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.463 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.539 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.541 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.553 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.616 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.618 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.619 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.629 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.699 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.701 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.774 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 1073741824" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.775 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.776 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.857 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.859 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.860 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Ensure instance console log exists: /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.860 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.861 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.861 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.893 185195 DEBUG nova.network.neutron [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.937 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.939 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Instance network_info: |[{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.940 185195 DEBUG oslo_concurrency.lockutils [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.940 185195 DEBUG nova.network.neutron [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Refreshing network info cache for port 4c1725b6-637d-4572-927d-1137b3ba538c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.944 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Start _get_guest_xml network_info=[{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.953 185195 WARNING nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.962 185195 DEBUG nova.virt.libvirt.host [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.964 185195 DEBUG nova.virt.libvirt.host [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.975 185195 DEBUG nova.virt.libvirt.host [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.976 185195 DEBUG nova.virt.libvirt.host [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.977 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.978 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:08:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='26a24ace-a5af-47b3-9314-7d2b9e74c6b8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.979 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.979 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.979 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.980 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.980 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.981 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.981 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.982 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.983 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.983 185195 DEBUG nova.virt.hardware [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.987 185195 DEBUG nova.privsep.utils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.988 185195 DEBUG nova.virt.libvirt.vif [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:10:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-e3iaxvta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:10:16Z,user_data=None,user_id='24260fb24da44b10b598f9c822c026b8',uuid=8c4af6eb-340b-477f-83d2-11aa7ab0b9d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.989 185195 DEBUG nova.network.os_vif_util [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.990 185195 DEBUG nova.network.os_vif_util [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.992 185195 DEBUG nova.objects.instance [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:10:20 compute-0 nova_compute[185191]: 2026-01-27 15:10:20.993 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.014 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <uuid>8c4af6eb-340b-477f-83d2-11aa7ab0b9d3</uuid>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <name>instance-00000001</name>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <memory>524288</memory>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:name>test_0</nova:name>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:10:20</nova:creationTime>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:flavor name="m1.small">
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:memory>512</nova:memory>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:user uuid="24260fb24da44b10b598f9c822c026b8">admin</nova:user>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:project uuid="dd88ca4062da4fb9bedb3a0002a43c12">admin</nova:project>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         <nova:port uuid="4c1725b6-637d-4572-927d-1137b3ba538c">
Jan 27 15:10:21 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="192.168.0.180" ipVersion="4"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <system>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="serial">8c4af6eb-340b-477f-83d2-11aa7ab0b9d3</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="uuid">8c4af6eb-340b-477f-83d2-11aa7ab0b9d3</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </system>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <os>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </os>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <features>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </features>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <target dev="vdb" bus="virtio"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.config"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:89:6e:c4"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <target dev="tap4c1725b6-63"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/console.log" append="off"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <video>
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </video>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:10:21 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:10:21 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:10:21 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:10:21 compute-0 nova_compute[185191]: </domain>
Jan 27 15:10:21 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.016 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Preparing to wait for external event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.017 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.017 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.017 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.018 185195 DEBUG nova.virt.libvirt.vif [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:10:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-e3iaxvta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2026-01-27T15:10:16Z,user_data=None,user_id='24260fb24da44b10b598f9c822c026b8',uuid=8c4af6eb-340b-477f-83d2-11aa7ab0b9d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.019 185195 DEBUG nova.network.os_vif_util [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.020 185195 DEBUG nova.network.os_vif_util [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.020 185195 DEBUG os_vif [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.057 185195 DEBUG ovsdbapp.backend.ovs_idl [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.057 185195 DEBUG ovsdbapp.backend.ovs_idl [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.058 185195 DEBUG ovsdbapp.backend.ovs_idl [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.058 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.059 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.059 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.060 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.061 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.064 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.071 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.072 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.072 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.073 185195 INFO oslo.privsep.daemon [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpnbcc7_u8/privsep.sock']
Jan 27 15:10:21 compute-0 podman[238504]: 2026-01-27 15:10:21.340135586 +0000 UTC m=+0.090425136 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 15:10:21 compute-0 podman[238523]: 2026-01-27 15:10:21.459017838 +0000 UTC m=+0.079766283 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git)
Jan 27 15:10:21 compute-0 podman[238542]: 2026-01-27 15:10:21.602540296 +0000 UTC m=+0.115473753 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.925 185195 INFO oslo.privsep.daemon [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.759 238568 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.762 238568 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.765 238568 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 27 15:10:21 compute-0 nova_compute[185191]: 2026-01-27 15:10:21.765 238568 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238568
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.182 185195 DEBUG nova.network.neutron [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated VIF entry in instance network info cache for port 4c1725b6-637d-4572-927d-1137b3ba538c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.183 185195 DEBUG nova.network.neutron [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.222 185195 DEBUG oslo_concurrency.lockutils [req-3f1bfcca-8168-4ef2-94d9-0f318459cb6f req-c6013c9a-c8ab-4a32-8812-1f0527ec6ba5 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.300 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.301 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c1725b6-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.301 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c1725b6-63, col_values=(('external_ids', {'iface-id': '4c1725b6-637d-4572-927d-1137b3ba538c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:6e:c4', 'vm-uuid': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.304 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:22 compute-0 NetworkManager[56090]: <info>  [1769526622.3052] manager: (tap4c1725b6-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.308 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.312 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.315 185195 INFO os_vif [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63')
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.524 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.524 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.524 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.525 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No VIF found with MAC fa:16:3e:89:6e:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:10:22 compute-0 nova_compute[185191]: 2026-01-27 15:10:22.525 185195 INFO nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Using config drive
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.430 185195 INFO nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Creating config drive at /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.config
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.436 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5p8iqg97 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.589 185195 DEBUG oslo_concurrency.processutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5p8iqg97" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:10:24 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 27 15:10:24 compute-0 kernel: tap4c1725b6-63: entered promiscuous mode
Jan 27 15:10:24 compute-0 NetworkManager[56090]: <info>  [1769526624.7152] manager: (tap4c1725b6-63): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.714 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:24 compute-0 ovn_controller[97541]: 2026-01-27T15:10:24Z|00027|binding|INFO|Claiming lport 4c1725b6-637d-4572-927d-1137b3ba538c for this chassis.
Jan 27 15:10:24 compute-0 ovn_controller[97541]: 2026-01-27T15:10:24Z|00028|binding|INFO|4c1725b6-637d-4572-927d-1137b3ba538c: Claiming fa:16:3e:89:6e:c4 192.168.0.180
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.726 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:24 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:24.741 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6e:c4 192.168.0.180'], port_security=['fa:16:3e:89:6e:c4 192.168.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.180/24', 'neutron:device_id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '2', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=4c1725b6-637d-4572-927d-1137b3ba538c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:10:24 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:24.743 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 4c1725b6-637d-4572-927d-1137b3ba538c in datapath d7e37fe5-6354-4f61-95d0-78632be96811 bound to our chassis
Jan 27 15:10:24 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:24.745 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:10:24 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:24.747 106793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmptilk7gal/privsep.sock']
Jan 27 15:10:24 compute-0 systemd-udevd[238596]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:10:24 compute-0 NetworkManager[56090]: <info>  [1769526624.7947] device (tap4c1725b6-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:10:24 compute-0 NetworkManager[56090]: <info>  [1769526624.7988] device (tap4c1725b6-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.836 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:24 compute-0 ovn_controller[97541]: 2026-01-27T15:10:24Z|00029|binding|INFO|Setting lport 4c1725b6-637d-4572-927d-1137b3ba538c ovn-installed in OVS
Jan 27 15:10:24 compute-0 ovn_controller[97541]: 2026-01-27T15:10:24Z|00030|binding|INFO|Setting lport 4c1725b6-637d-4572-927d-1137b3ba538c up in Southbound
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.848 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:24 compute-0 systemd-machined[156506]: New machine qemu-1-instance-00000001.
Jan 27 15:10:24 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 27 15:10:24 compute-0 nova_compute[185191]: 2026-01-27 15:10:24.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:25 compute-0 nova_compute[185191]: 2026-01-27 15:10:25.201 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:10:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.547 106793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.549 106793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptilk7gal/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.398 238613 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.405 238613 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.409 238613 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.410 238613 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238613
Jan 27 15:10:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:25.552 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[431b7f15-102c-4533-8ec7-d18f51751968]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.053 185195 DEBUG nova.compute.manager [req-75034d24-2277-4b83-8b86-fba096d4ebab req-a7a84d05-c0f2-492e-99ba-8148b765af77 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.054 185195 DEBUG oslo_concurrency.lockutils [req-75034d24-2277-4b83-8b86-fba096d4ebab req-a7a84d05-c0f2-492e-99ba-8148b765af77 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.055 185195 DEBUG oslo_concurrency.lockutils [req-75034d24-2277-4b83-8b86-fba096d4ebab req-a7a84d05-c0f2-492e-99ba-8148b765af77 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.055 185195 DEBUG oslo_concurrency.lockutils [req-75034d24-2277-4b83-8b86-fba096d4ebab req-a7a84d05-c0f2-492e-99ba-8148b765af77 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.056 185195 DEBUG nova.compute.manager [req-75034d24-2277-4b83-8b86-fba096d4ebab req-a7a84d05-c0f2-492e-99ba-8148b765af77 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Processing event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.060 238613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.061 238613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.061 238613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.728 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.730 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526626.7297275, 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.730 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] VM Started (Lifecycle Event)
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.753 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.761 185195 INFO nova.virt.libvirt.driver [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Instance spawned successfully.
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.761 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.837 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb55f7c-112c-4a92-9b0d-dfaf998d0a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.839 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7e37fe5-61 in ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.841 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7e37fe5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.841 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9858b3-d86e-4000-976f-421478f622c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.845 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d294a3fd-277f-43c1-af7e-fdab1b5a1ad0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.862 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:10:26 compute-0 nova_compute[185191]: 2026-01-27 15:10:26.868 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.875 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[6292ace1-26a1-4e01-9b51-4f39440e71e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.931 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ad203eec-5d6e-44ef-a167-94a2af1fdab6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:26.933 106793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp84_ndbm3/privsep.sock']
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.009 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.010 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526626.7298987, 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.010 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] VM Paused (Lifecycle Event)
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.049 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.050 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.050 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.050 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.051 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.051 185195 DEBUG nova.virt.libvirt.driver [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.073 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.077 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526626.75241, 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.077 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] VM Resumed (Lifecycle Event)
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.228 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.234 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.255 185195 INFO nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Took 11.04 seconds to spawn the instance on the hypervisor.
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.256 185195 DEBUG nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.305 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.347 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.499 185195 INFO nova.compute.manager [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Took 12.31 seconds to build instance.
Jan 27 15:10:27 compute-0 nova_compute[185191]: 2026-01-27 15:10:27.545 185195 DEBUG oslo_concurrency.lockutils [None req-964293f1-db4a-4ae3-938f-aa1703da2c05 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.672 106793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.673 106793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp84_ndbm3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.492 238652 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.496 238652 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.498 238652 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.498 238652 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238652
Jan 27 15:10:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:27.676 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1e6e9b-0c7a-4183-89fb-592b7eed4434]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.211 238652 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.211 238652 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.212 238652 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.375 185195 DEBUG nova.compute.manager [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.377 185195 DEBUG oslo_concurrency.lockutils [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.382 185195 DEBUG oslo_concurrency.lockutils [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.400 185195 DEBUG oslo_concurrency.lockutils [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.401 185195 DEBUG nova.compute.manager [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] No waiting events found dispatching network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:10:28 compute-0 nova_compute[185191]: 2026-01-27 15:10:28.402 185195 WARNING nova.compute.manager [req-8719dc96-0afd-4ea5-a5f3-e7c904cec9c7 req-e65b5ee9-1082-4ebd-8e65-8b15de2f1b8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received unexpected event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c for instance with vm_state active and task_state None.
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.799 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[0b316114-0163-4b5d-a75d-7578fac2b821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.827 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[88c41fd5-145c-454a-bab1-e1490e9c1030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 NetworkManager[56090]: <info>  [1769526628.8288] manager: (tapd7e37fe5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 27 15:10:28 compute-0 systemd-udevd[238664]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.868 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[38bc0f8b-eaaa-4b4e-bcc9-cb6704ff7f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.873 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[4f33b75e-de97-4499-9bd3-a488f7187a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 NetworkManager[56090]: <info>  [1769526628.9055] device (tapd7e37fe5-60): carrier: link connected
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.912 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[14ca6e56-15a7-407f-b136-b8d1c76d5522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.938 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1b22c4b7-380d-4861-9798-9fc6dca38eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 36898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238683, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.960 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd3f068-c48b-44f0-8495-03fe86e30731]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:72c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420463, 'tstamp': 420463}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238684, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:28.981 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[411174a1-7d55-47b6-86c5-31e31dc7accf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 36898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238685, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.020 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8776cff4-eb8e-4d66-b108-cdd2b246244b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.088 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0909b981-d5a4-4eec-b020-6dbb6ad1c485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.092 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.093 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.094 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:29 compute-0 nova_compute[185191]: 2026-01-27 15:10:29.097 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:29 compute-0 NetworkManager[56090]: <info>  [1769526629.0979] manager: (tapd7e37fe5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 27 15:10:29 compute-0 kernel: tapd7e37fe5-60: entered promiscuous mode
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.101 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:10:29 compute-0 nova_compute[185191]: 2026-01-27 15:10:29.104 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.105 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7e37fe5-6354-4f61-95d0-78632be96811.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7e37fe5-6354-4f61-95d0-78632be96811.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:10:29 compute-0 ovn_controller[97541]: 2026-01-27T15:10:29Z|00031|binding|INFO|Releasing lport d4262905-2cdc-4929-a155-db8204d90ca2 from this chassis (sb_readonly=0)
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.107 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9efdd7ff-b969-45dd-83d5-98565ded24fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.109 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/d7e37fe5-6354-4f61-95d0-78632be96811.pid.haproxy
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:10:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:10:29.112 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'env', 'PROCESS_TAG=haproxy-d7e37fe5-6354-4f61-95d0-78632be96811', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7e37fe5-6354-4f61-95d0-78632be96811.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:10:29 compute-0 nova_compute[185191]: 2026-01-27 15:10:29.127 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:29 compute-0 podman[238718]: 2026-01-27 15:10:29.510064763 +0000 UTC m=+0.035683841 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:10:29 compute-0 podman[238718]: 2026-01-27 15:10:29.698117785 +0000 UTC m=+0.223736833 container create 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:10:29 compute-0 podman[201073]: time="2026-01-27T15:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:10:29 compute-0 systemd[1]: Started libpod-conmon-642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff.scope.
Jan 27 15:10:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da103f3c94a21cbe889112284815c180c05f87935ef6db3522aedd242b18909f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:10:29 compute-0 podman[238718]: 2026-01-27 15:10:29.87836673 +0000 UTC m=+0.403985798 container init 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:10:29 compute-0 podman[238718]: 2026-01-27 15:10:29.88663508 +0000 UTC m=+0.412254128 container start 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:10:29 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [NOTICE]   (238737) : New worker (238739) forked
Jan 27 15:10:29 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [NOTICE]   (238737) : Loading success.
Jan 27 15:10:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28504 "" "Go-http-client/1.1"
Jan 27 15:10:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4359 "" "Go-http-client/1.1"
Jan 27 15:10:30 compute-0 nova_compute[185191]: 2026-01-27 15:10:30.203 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:31 compute-0 podman[238748]: 2026-01-27 15:10:31.33697256 +0000 UTC m=+0.093099407 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:10:31 compute-0 openstack_network_exporter[204239]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:10:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:10:31 compute-0 openstack_network_exporter[204239]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:10:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:10:32 compute-0 nova_compute[185191]: 2026-01-27 15:10:32.308 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:35 compute-0 nova_compute[185191]: 2026-01-27 15:10:35.206 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:35 compute-0 podman[238770]: 2026-01-27 15:10:35.317707741 +0000 UTC m=+0.073939428 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base 
Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 15:10:35 compute-0 podman[238771]: 2026-01-27 15:10:35.328416036 +0000 UTC m=+0.081876759 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:10:37 compute-0 nova_compute[185191]: 2026-01-27 15:10:37.311 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:40 compute-0 nova_compute[185191]: 2026-01-27 15:10:40.209 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:41 compute-0 nova_compute[185191]: 2026-01-27 15:10:41.597 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:41 compute-0 ovn_controller[97541]: 2026-01-27T15:10:41Z|00032|binding|INFO|Releasing lport d4262905-2cdc-4929-a155-db8204d90ca2 from this chassis (sb_readonly=0)
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.5990] manager: (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6019] device (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <warn>  [1769526641.6022] device (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6075] manager: (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6100] device (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <warn>  [1769526641.6101] device (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6152] manager: (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6185] manager: (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6213] device (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 15:10:41 compute-0 nova_compute[185191]: 2026-01-27 15:10:41.624 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:41 compute-0 NetworkManager[56090]: <info>  [1769526641.6243] device (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 15:10:41 compute-0 ovn_controller[97541]: 2026-01-27T15:10:41Z|00033|binding|INFO|Releasing lport d4262905-2cdc-4929-a155-db8204d90ca2 from this chassis (sb_readonly=0)
Jan 27 15:10:41 compute-0 nova_compute[185191]: 2026-01-27 15:10:41.642 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.253 185195 DEBUG nova.compute.manager [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-changed-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.255 185195 DEBUG nova.compute.manager [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Refreshing instance network info cache due to event network-changed-4c1725b6-637d-4572-927d-1137b3ba538c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.256 185195 DEBUG oslo_concurrency.lockutils [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.257 185195 DEBUG oslo_concurrency.lockutils [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.257 185195 DEBUG nova.network.neutron [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Refreshing network info cache for port 4c1725b6-637d-4572-927d-1137b3ba538c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:10:42 compute-0 nova_compute[185191]: 2026-01-27 15:10:42.314 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:42 compute-0 podman[238813]: 2026-01-27 15:10:42.35234055 +0000 UTC m=+0.109601127 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.211 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.742 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.765 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.766 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.767 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:10:45 compute-0 nova_compute[185191]: 2026-01-27 15:10:45.823 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:10:46 compute-0 nova_compute[185191]: 2026-01-27 15:10:46.138 185195 DEBUG nova.network.neutron [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated VIF entry in instance network info cache for port 4c1725b6-637d-4572-927d-1137b3ba538c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:10:46 compute-0 nova_compute[185191]: 2026-01-27 15:10:46.139 185195 DEBUG nova.network.neutron [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:10:46 compute-0 nova_compute[185191]: 2026-01-27 15:10:46.177 185195 DEBUG oslo_concurrency.lockutils [req-5255e9f7-bbd3-46aa-861f-d584f0ba2cf7 req-6ac11b73-d220-4da6-9b36-3a21427eb31d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:10:47 compute-0 nova_compute[185191]: 2026-01-27 15:10:47.317 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:49 compute-0 podman[238837]: 2026-01-27 15:10:49.299869489 +0000 UTC m=+0.053533695 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 15:10:50 compute-0 nova_compute[185191]: 2026-01-27 15:10:50.213 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:50 compute-0 sshd-session[238856]: Invalid user solana from 2.57.122.238 port 37310
Jan 27 15:10:50 compute-0 sshd-session[238856]: Connection closed by invalid user solana 2.57.122.238 port 37310 [preauth]
Jan 27 15:10:52 compute-0 nova_compute[185191]: 2026-01-27 15:10:52.327 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:52 compute-0 podman[238858]: 2026-01-27 15:10:52.353263612 +0000 UTC m=+0.107234884 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:10:52 compute-0 podman[238860]: 2026-01-27 15:10:52.375766541 +0000 UTC m=+0.123867066 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Jan 27 15:10:52 compute-0 podman[238859]: 2026-01-27 15:10:52.417739267 +0000 UTC m=+0.168207595 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 15:10:55 compute-0 nova_compute[185191]: 2026-01-27 15:10:55.215 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:57 compute-0 nova_compute[185191]: 2026-01-27 15:10:57.331 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:10:59 compute-0 podman[201073]: time="2026-01-27T15:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:10:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:10:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 27 15:11:00 compute-0 nova_compute[185191]: 2026-01-27 15:11:00.219 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:00.221 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:00.221 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:00.222 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:01 compute-0 openstack_network_exporter[204239]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:11:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:11:01 compute-0 openstack_network_exporter[204239]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:11:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:11:01 compute-0 ovn_controller[97541]: 2026-01-27T15:11:01Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:6e:c4 192.168.0.180
Jan 27 15:11:01 compute-0 ovn_controller[97541]: 2026-01-27T15:11:01Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:6e:c4 192.168.0.180
Jan 27 15:11:01 compute-0 podman[238936]: 2026-01-27 15:11:01.676278032 +0000 UTC m=+0.071110383 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:11:02 compute-0 nova_compute[185191]: 2026-01-27 15:11:02.339 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:05 compute-0 nova_compute[185191]: 2026-01-27 15:11:05.222 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:06 compute-0 podman[238959]: 2026-01-27 15:11:06.316547876 +0000 UTC m=+0.068493545 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:11:06 compute-0 podman[238958]: 2026-01-27 15:11:06.329391648 +0000 UTC m=+0.088028085 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Jan 27 15:11:07 compute-0 nova_compute[185191]: 2026-01-27 15:11:07.341 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:10 compute-0 nova_compute[185191]: 2026-01-27 15:11:10.224 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.983 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.984 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ed71d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:11:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:10.994 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.362 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:11:11 compute-0 ovn_controller[97541]: 2026-01-27T15:11:11Z|00034|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.910 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Tue, 27 Jan 2026 15:11:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9912b0a4-6b72-4279-972a-71889e3c1fbc x-openstack-request-id: req-9912b0a4-6b72-4279-972a-71889e3c1fbc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.911 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3", "name": "test_0", "status": "ACTIVE", "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "user_id": "24260fb24da44b10b598f9c822c026b8", "metadata": {}, "hostId": "3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb", "image": {"id": "2b336e4b-c98e-4b97-9f8f-b3290e6b6caf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"}]}, "flavor": {"id": "26a24ace-a5af-47b3-9314-7d2b9e74c6b8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/26a24ace-a5af-47b3-9314-7d2b9e74c6b8"}]}, "created": "2026-01-27T15:10:11Z", "updated": "2026-01-27T15:10:27Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.180", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:89:6e:c4"}, {"version": 4, "addr": "192.168.122.196", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:89:6e:c4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:10:27.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.911 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 used request id req-9912b0a4-6b72-4279-972a-71889e3c1fbc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.914 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.915 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:11.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:11:11.914941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.017 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3760086907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.018 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.019 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.020 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:11:12.020356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:11:12.022319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.047 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.047 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:11:12.049002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.054 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 / tap4c1725b6-63 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.054 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.055 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:11:12.055178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:11:12.055982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:11:12.056989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:11:12.057774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.087 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 34830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 49.6015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:11:12.088583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:11:12.089354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:11:12.090379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.092 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.092 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:11:12.091235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:11:12.092019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:11:12.093434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:11:12.094365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 1667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:11:12.095488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:11:12.096623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.097 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.100 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.100 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.100 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.100 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:11:12.098860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.102 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.103 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:11:12.100197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:11:12.101582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:11:12.102741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.104 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:11:12.104615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.105 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.106 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.106 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.106 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:11:12.105777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.107 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.110 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.111 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.111 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:11:12.107294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:11:12.109329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:11:12.110612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:11:12.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:11:12 compute-0 nova_compute[185191]: 2026-01-27 15:11:12.344 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:13 compute-0 podman[239003]: 2026-01-27 15:11:13.333520781 +0000 UTC m=+0.074747172 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:11:15 compute-0 nova_compute[185191]: 2026-01-27 15:11:15.227 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:17 compute-0 nova_compute[185191]: 2026-01-27 15:11:17.347 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:17 compute-0 nova_compute[185191]: 2026-01-27 15:11:17.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:17 compute-0 nova_compute[185191]: 2026-01-27 15:11:17.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:17 compute-0 nova_compute[185191]: 2026-01-27 15:11:17.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.168 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.169 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.169 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.170 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.546 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.607 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.608 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.663 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.664 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.723 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.724 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:18 compute-0 nova_compute[185191]: 2026-01-27 15:11:18.783 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.090 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.091 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5253MB free_disk=72.42540740966797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.091 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.092 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.923 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.924 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.924 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:11:19 compute-0 nova_compute[185191]: 2026-01-27 15:11:19.972 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:11:20 compute-0 nova_compute[185191]: 2026-01-27 15:11:20.122 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:11:20 compute-0 nova_compute[185191]: 2026-01-27 15:11:20.230 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:20 compute-0 podman[239041]: 2026-01-27 15:11:20.325200603 +0000 UTC m=+0.073465138 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 27 15:11:20 compute-0 nova_compute[185191]: 2026-01-27 15:11:20.571 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:11:20 compute-0 nova_compute[185191]: 2026-01-27 15:11:20.571 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:21 compute-0 nova_compute[185191]: 2026-01-27 15:11:21.571 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:21 compute-0 nova_compute[185191]: 2026-01-27 15:11:21.571 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:21 compute-0 nova_compute[185191]: 2026-01-27 15:11:21.571 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:11:21 compute-0 nova_compute[185191]: 2026-01-27 15:11:21.572 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:11:22 compute-0 nova_compute[185191]: 2026-01-27 15:11:22.350 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:22 compute-0 nova_compute[185191]: 2026-01-27 15:11:22.391 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:11:22 compute-0 nova_compute[185191]: 2026-01-27 15:11:22.391 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:11:22 compute-0 nova_compute[185191]: 2026-01-27 15:11:22.391 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:11:22 compute-0 nova_compute[185191]: 2026-01-27 15:11:22.391 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:11:23 compute-0 podman[239060]: 2026-01-27 15:11:23.34003384 +0000 UTC m=+0.100612361 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 27 15:11:23 compute-0 podman[239061]: 2026-01-27 15:11:23.344705064 +0000 UTC m=+0.099549042 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:11:23 compute-0 podman[239062]: 2026-01-27 15:11:23.352165823 +0000 UTC m=+0.103027705 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible)
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.413 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.482 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.483 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.484 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.484 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.485 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:24 compute-0 nova_compute[185191]: 2026-01-27 15:11:24.485 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:11:25 compute-0 nova_compute[185191]: 2026-01-27 15:11:25.232 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:25 compute-0 nova_compute[185191]: 2026-01-27 15:11:25.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:11:27 compute-0 nova_compute[185191]: 2026-01-27 15:11:27.354 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:29 compute-0 podman[201073]: time="2026-01-27T15:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:11:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:11:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 27 15:11:30 compute-0 nova_compute[185191]: 2026-01-27 15:11:30.234 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:31 compute-0 openstack_network_exporter[204239]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:11:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:11:31 compute-0 openstack_network_exporter[204239]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:11:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:11:32 compute-0 podman[239123]: 2026-01-27 15:11:32.337836798 +0000 UTC m=+0.094412086 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:11:32 compute-0 nova_compute[185191]: 2026-01-27 15:11:32.357 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:35 compute-0 nova_compute[185191]: 2026-01-27 15:11:35.237 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:37 compute-0 podman[239151]: 2026-01-27 15:11:37.329758907 +0000 UTC m=+0.070645903 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:11:37 compute-0 podman[239150]: 2026-01-27 15:11:37.353320234 +0000 UTC m=+0.090373578 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Jan 27 15:11:37 compute-0 nova_compute[185191]: 2026-01-27 15:11:37.360 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:37 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:37.713 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:11:37 compute-0 nova_compute[185191]: 2026-01-27 15:11:37.714 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:37 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:37.714 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:11:40 compute-0 nova_compute[185191]: 2026-01-27 15:11:40.239 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:42 compute-0 nova_compute[185191]: 2026-01-27 15:11:42.363 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.670 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.670 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.701 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:11:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:43.716 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.830 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.831 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.840 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:11:43 compute-0 nova_compute[185191]: 2026-01-27 15:11:43.840 185195 INFO nova.compute.claims [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.106 185195 DEBUG nova.compute.provider_tree [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.166 185195 DEBUG nova.scheduler.client.report [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:11:44 compute-0 podman[239193]: 2026-01-27 15:11:44.304524831 +0000 UTC m=+0.061193481 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.381 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.382 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.463 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.464 185195 DEBUG nova.network.neutron [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.522 185195 INFO nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.579 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.807 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.809 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.809 185195 INFO nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Creating image(s)
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.810 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.811 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.813 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.831 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.893 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.894 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.895 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.907 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.964 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:44 compute-0 nova_compute[185191]: 2026-01-27 15:11:44.965 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.004 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.005 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.006 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.063 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.064 185195 DEBUG nova.virt.disk.api [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking if we can resize image /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.065 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.138 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.141 185195 DEBUG nova.virt.disk.api [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Cannot resize image /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.142 185195 DEBUG nova.objects.instance [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'migration_context' on Instance uuid b98b01bd-8dfe-4188-be2f-ebffe0bd1717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.188 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.189 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.189 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.201 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.242 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.278 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.279 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.279 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.295 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.377 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.379 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.441 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 1073741824" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.442 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.443 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.504 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.506 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.507 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Ensure instance console log exists: /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.509 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.510 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:45 compute-0 nova_compute[185191]: 2026-01-27 15:11:45.511 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.366 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.380 185195 DEBUG nova.network.neutron [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Successfully updated port: 62a0d85c-d24f-4ada-af0a-2b902803778f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.415 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.416 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.417 185195 DEBUG nova.network.neutron [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.509 185195 DEBUG nova.compute.manager [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-changed-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.509 185195 DEBUG nova.compute.manager [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Refreshing instance network info cache due to event network-changed-62a0d85c-d24f-4ada-af0a-2b902803778f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.509 185195 DEBUG oslo_concurrency.lockutils [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:11:47 compute-0 nova_compute[185191]: 2026-01-27 15:11:47.601 185195 DEBUG nova.network.neutron [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.636 185195 DEBUG nova.network.neutron [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.888 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.889 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Instance network_info: |[{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.889 185195 DEBUG oslo_concurrency.lockutils [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.890 185195 DEBUG nova.network.neutron [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Refreshing network info cache for port 62a0d85c-d24f-4ada-af0a-2b902803778f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.893 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Start _get_guest_xml network_info=[{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.900 185195 WARNING nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.908 185195 DEBUG nova.virt.libvirt.host [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.908 185195 DEBUG nova.virt.libvirt.host [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.913 185195 DEBUG nova.virt.libvirt.host [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.914 185195 DEBUG nova.virt.libvirt.host [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.914 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.915 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:08:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='26a24ace-a5af-47b3-9314-7d2b9e74c6b8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.915 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.916 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.916 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.916 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.917 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.917 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.917 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.917 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.918 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.918 185195 DEBUG nova.virt.hardware [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.922 185195 DEBUG nova.virt.libvirt.vif [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',id=2,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-d0vhof01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:11:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQzMzI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 15:11:49 compute-0 nova_compute[185191]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQzMzI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=b98b01bd-8dfe-4188-be2f-ebffe0bd1717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.922 185195 DEBUG nova.network.os_vif_util [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.923 185195 DEBUG nova.network.os_vif_util [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:11:49 compute-0 nova_compute[185191]: 2026-01-27 15:11:49.924 185195 DEBUG nova.objects.instance [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'pci_devices' on Instance uuid b98b01bd-8dfe-4188-be2f-ebffe0bd1717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.027 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <uuid>b98b01bd-8dfe-4188-be2f-ebffe0bd1717</uuid>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <name>instance-00000002</name>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <memory>524288</memory>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:name>vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4</nova:name>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:11:49</nova:creationTime>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:flavor name="m1.small">
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:memory>512</nova:memory>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:user uuid="24260fb24da44b10b598f9c822c026b8">admin</nova:user>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:project uuid="dd88ca4062da4fb9bedb3a0002a43c12">admin</nova:project>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         <nova:port uuid="62a0d85c-d24f-4ada-af0a-2b902803778f">
Jan 27 15:11:50 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="192.168.0.246" ipVersion="4"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <system>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="serial">b98b01bd-8dfe-4188-be2f-ebffe0bd1717</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="uuid">b98b01bd-8dfe-4188-be2f-ebffe0bd1717</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </system>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <os>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </os>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <features>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </features>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <target dev="vdb" bus="virtio"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.config"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:f3:86:b3"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <target dev="tap62a0d85c-d2"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/console.log" append="off"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <video>
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </video>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:11:50 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:11:50 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:11:50 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:11:50 compute-0 nova_compute[185191]: </domain>
Jan 27 15:11:50 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.029 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Preparing to wait for external event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.029 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.030 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.030 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.031 185195 DEBUG nova.virt.libvirt.vif [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',id=2,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-d0vhof01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:11:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQzMzI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 15:11:50 compute-0 nova_compute[185191]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQzMzI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=b98b01bd-8dfe-4188-be2f-ebffe0bd1717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.031 185195 DEBUG nova.network.os_vif_util [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.032 185195 DEBUG nova.network.os_vif_util [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.032 185195 DEBUG os_vif [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.033 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.033 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.034 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.037 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.037 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62a0d85c-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.038 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62a0d85c-d2, col_values=(('external_ids', {'iface-id': '62a0d85c-d24f-4ada-af0a-2b902803778f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:86:b3', 'vm-uuid': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.039 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:50 compute-0 NetworkManager[56090]: <info>  [1769526710.0406] manager: (tap62a0d85c-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.042 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.048 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.050 185195 INFO os_vif [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2')
Jan 27 15:11:50 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:11:49.922 185195 DEBUG nova.virt.libvirt.vif [None req-27b8e47c-ff [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.245 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:50 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:11:50.031 185195 DEBUG nova.virt.libvirt.vif [None req-27b8e47c-ff [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.443 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.444 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.445 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.445 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No VIF found with MAC fa:16:3e:f3:86:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:11:50 compute-0 nova_compute[185191]: 2026-01-27 15:11:50.446 185195 INFO nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Using config drive
Jan 27 15:11:51 compute-0 podman[239247]: 2026-01-27 15:11:51.337846163 +0000 UTC m=+0.098494464 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.649 185195 INFO nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Creating config drive at /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.config
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.655 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bqti2gz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.780 185195 DEBUG oslo_concurrency.processutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bqti2gz" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.841 185195 DEBUG nova.network.neutron [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updated VIF entry in instance network info cache for port 62a0d85c-d24f-4ada-af0a-2b902803778f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.841 185195 DEBUG nova.network.neutron [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:11:51 compute-0 NetworkManager[56090]: <info>  [1769526711.8518] manager: (tap62a0d85c-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 27 15:11:51 compute-0 kernel: tap62a0d85c-d2: entered promiscuous mode
Jan 27 15:11:51 compute-0 ovn_controller[97541]: 2026-01-27T15:11:51Z|00035|binding|INFO|Claiming lport 62a0d85c-d24f-4ada-af0a-2b902803778f for this chassis.
Jan 27 15:11:51 compute-0 ovn_controller[97541]: 2026-01-27T15:11:51Z|00036|binding|INFO|62a0d85c-d24f-4ada-af0a-2b902803778f: Claiming fa:16:3e:f3:86:b3 192.168.0.246
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.854 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:51 compute-0 ovn_controller[97541]: 2026-01-27T15:11:51Z|00037|binding|INFO|Setting lport 62a0d85c-d24f-4ada-af0a-2b902803778f ovn-installed in OVS
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.870 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.875 185195 DEBUG oslo_concurrency.lockutils [req-e14836d6-2bcb-4970-a91c-05982088d227 req-b5ca62d1-7e6e-4bc0-b041-096749be12b7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:11:51 compute-0 nova_compute[185191]: 2026-01-27 15:11:51.878 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:51 compute-0 systemd-machined[156506]: New machine qemu-2-instance-00000002.
Jan 27 15:11:51 compute-0 systemd-udevd[239286]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:11:51 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 27 15:11:51 compute-0 NetworkManager[56090]: <info>  [1769526711.9237] device (tap62a0d85c-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.922 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:86:b3 192.168.0.246'], port_security=['fa:16:3e:f3:86:b3 192.168.0.246'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-xakll53pfa3s-f35zx7ec3yvf-port-arkyrtjcq7v6', 'neutron:cidrs': '192.168.0.246/24', 'neutron:device_id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-xakll53pfa3s-f35zx7ec3yvf-port-arkyrtjcq7v6', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '2', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=62a0d85c-d24f-4ada-af0a-2b902803778f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.923 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 62a0d85c-d24f-4ada-af0a-2b902803778f in datapath d7e37fe5-6354-4f61-95d0-78632be96811 bound to our chassis
Jan 27 15:11:51 compute-0 ovn_controller[97541]: 2026-01-27T15:11:51Z|00038|binding|INFO|Setting lport 62a0d85c-d24f-4ada-af0a-2b902803778f up in Southbound
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.924 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:11:51 compute-0 NetworkManager[56090]: <info>  [1769526711.9276] device (tap62a0d85c-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.938 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[aad8f455-273b-4a7a-a71c-0b45080e7fd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.966 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[e36dcdb6-0f14-402f-a474-b4e4d2e64f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.969 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3aecea-a3b2-4216-b86d-0a387b66c817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:51.999 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[d95ecf25-832b-4cc2-b453-50f43d6667cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.021 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[17a5d64b-3ebb-42cd-a962-8061bf28b4f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 36898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239300, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.039 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3405a35e-6008-4f36-a2cd-4b20ffc3d2e3]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239301, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239301, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.041 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.043 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.044 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.045 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.045 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.045 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:11:52 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:11:52.045 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.211 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526712.210278, b98b01bd-8dfe-4188-be2f-ebffe0bd1717 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.211 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] VM Started (Lifecycle Event)
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.249 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.255 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526712.2114286, b98b01bd-8dfe-4188-be2f-ebffe0bd1717 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.255 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] VM Paused (Lifecycle Event)
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.301 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.306 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.319 185195 DEBUG nova.compute.manager [req-81eba076-5cb3-4e6e-a763-d92f952608b0 req-a1e8d112-a9b0-46ab-904c-24921a49582d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.320 185195 DEBUG oslo_concurrency.lockutils [req-81eba076-5cb3-4e6e-a763-d92f952608b0 req-a1e8d112-a9b0-46ab-904c-24921a49582d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.320 185195 DEBUG oslo_concurrency.lockutils [req-81eba076-5cb3-4e6e-a763-d92f952608b0 req-a1e8d112-a9b0-46ab-904c-24921a49582d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.321 185195 DEBUG oslo_concurrency.lockutils [req-81eba076-5cb3-4e6e-a763-d92f952608b0 req-a1e8d112-a9b0-46ab-904c-24921a49582d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.321 185195 DEBUG nova.compute.manager [req-81eba076-5cb3-4e6e-a763-d92f952608b0 req-a1e8d112-a9b0-46ab-904c-24921a49582d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Processing event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.322 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.326 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.330 185195 INFO nova.virt.libvirt.driver [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Instance spawned successfully.
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.331 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.338 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.338 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769526712.3255534, b98b01bd-8dfe-4188-be2f-ebffe0bd1717 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.339 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] VM Resumed (Lifecycle Event)
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.383 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.391 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.392 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.392 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.393 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.394 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.395 185195 DEBUG nova.virt.libvirt.driver [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.400 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.435 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.472 185195 INFO nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Took 7.66 seconds to spawn the instance on the hypervisor.
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.472 185195 DEBUG nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.564 185195 INFO nova.compute.manager [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Took 8.78 seconds to build instance.
Jan 27 15:11:52 compute-0 nova_compute[185191]: 2026-01-27 15:11:52.708 185195 DEBUG oslo_concurrency.lockutils [None req-27b8e47c-ffd8-423d-9554-d5a5d3a62c03 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:54 compute-0 podman[239309]: 2026-01-27 15:11:54.326608766 +0000 UTC m=+0.078034198 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:11:54 compute-0 podman[239311]: 2026-01-27 15:11:54.32938324 +0000 UTC m=+0.074979737 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Jan 27 15:11:54 compute-0 podman[239310]: 2026-01-27 15:11:54.387543939 +0000 UTC m=+0.137905263 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.525 185195 DEBUG nova.compute.manager [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.525 185195 DEBUG oslo_concurrency.lockutils [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.525 185195 DEBUG oslo_concurrency.lockutils [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.526 185195 DEBUG oslo_concurrency.lockutils [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.526 185195 DEBUG nova.compute.manager [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] No waiting events found dispatching network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:11:54 compute-0 nova_compute[185191]: 2026-01-27 15:11:54.526 185195 WARNING nova.compute.manager [req-b829be66-909a-4a33-b5cc-8bd7e25f5894 req-c6def45d-7f6e-413a-b126-e77c0601b473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received unexpected event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f for instance with vm_state active and task_state None.
Jan 27 15:11:55 compute-0 nova_compute[185191]: 2026-01-27 15:11:55.041 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:55 compute-0 nova_compute[185191]: 2026-01-27 15:11:55.247 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:11:59 compute-0 podman[201073]: time="2026-01-27T15:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:11:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:11:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 15:12:00 compute-0 nova_compute[185191]: 2026-01-27 15:12:00.044 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:12:00.222 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:12:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:12:00.222 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:12:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:12:00.223 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:12:00 compute-0 nova_compute[185191]: 2026-01-27 15:12:00.249 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:01 compute-0 openstack_network_exporter[204239]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:12:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:12:01 compute-0 openstack_network_exporter[204239]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:12:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:12:03 compute-0 podman[239369]: 2026-01-27 15:12:03.317112631 +0000 UTC m=+0.069834111 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 15:12:05 compute-0 nova_compute[185191]: 2026-01-27 15:12:05.047 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:05 compute-0 nova_compute[185191]: 2026-01-27 15:12:05.251 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:08 compute-0 podman[239388]: 2026-01-27 15:12:08.361167308 +0000 UTC m=+0.113028741 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=kepler, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Jan 27 15:12:08 compute-0 podman[239389]: 2026-01-27 15:12:08.373569798 +0000 UTC m=+0.097528948 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:12:10 compute-0 nova_compute[185191]: 2026-01-27 15:12:10.049 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:10 compute-0 nova_compute[185191]: 2026-01-27 15:12:10.253 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:14 compute-0 podman[239430]: 2026-01-27 15:12:14.733129427 +0000 UTC m=+0.060649756 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:12:15 compute-0 nova_compute[185191]: 2026-01-27 15:12:15.053 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:15 compute-0 nova_compute[185191]: 2026-01-27 15:12:15.256 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:18 compute-0 nova_compute[185191]: 2026-01-27 15:12:18.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:18 compute-0 nova_compute[185191]: 2026-01-27 15:12:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:18 compute-0 nova_compute[185191]: 2026-01-27 15:12:18.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:18 compute-0 nova_compute[185191]: 2026-01-27 15:12:18.998 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:12:18 compute-0 nova_compute[185191]: 2026-01-27 15:12:18.999 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.000 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.000 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.200 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.269 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.270 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.330 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.332 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.394 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.395 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.452 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.460 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.520 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.521 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.580 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.581 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.644 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.645 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:12:19 compute-0 nova_compute[185191]: 2026-01-27 15:12:19.709 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.048 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.050 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=72.42438507080078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.051 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.052 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.056 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.258 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.311 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.312 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.313 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.313 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.404 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.446 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.539 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:12:20 compute-0 nova_compute[185191]: 2026-01-27 15:12:20.540 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:12:21 compute-0 nova_compute[185191]: 2026-01-27 15:12:21.536 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:21 compute-0 nova_compute[185191]: 2026-01-27 15:12:21.646 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:21 compute-0 nova_compute[185191]: 2026-01-27 15:12:21.647 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:12:21 compute-0 nova_compute[185191]: 2026-01-27 15:12:21.647 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:12:22 compute-0 ovn_controller[97541]: 2026-01-27T15:12:22Z|00039|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 27 15:12:22 compute-0 podman[239480]: 2026-01-27 15:12:22.317246417 +0000 UTC m=+0.070811667 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:12:22 compute-0 nova_compute[185191]: 2026-01-27 15:12:22.450 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:12:22 compute-0 nova_compute[185191]: 2026-01-27 15:12:22.451 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:12:22 compute-0 nova_compute[185191]: 2026-01-27 15:12:22.452 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:12:22 compute-0 nova_compute[185191]: 2026-01-27 15:12:22.452 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.480 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.533 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.533 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.534 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.534 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.535 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.535 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:12:24 compute-0 nova_compute[185191]: 2026-01-27 15:12:24.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:25 compute-0 nova_compute[185191]: 2026-01-27 15:12:25.058 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:25 compute-0 nova_compute[185191]: 2026-01-27 15:12:25.260 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:25 compute-0 podman[239498]: 2026-01-27 15:12:25.344987657 +0000 UTC m=+0.091261891 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:12:25 compute-0 podman[239502]: 2026-01-27 15:12:25.378977783 +0000 UTC m=+0.110347830 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7)
Jan 27 15:12:25 compute-0 podman[239499]: 2026-01-27 15:12:25.387708505 +0000 UTC m=+0.127348292 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:12:26 compute-0 nova_compute[185191]: 2026-01-27 15:12:26.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:12:27 compute-0 ovn_controller[97541]: 2026-01-27T15:12:27Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:86:b3 192.168.0.246
Jan 27 15:12:27 compute-0 ovn_controller[97541]: 2026-01-27T15:12:27Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:86:b3 192.168.0.246
Jan 27 15:12:29 compute-0 podman[201073]: time="2026-01-27T15:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:12:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:12:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 27 15:12:30 compute-0 nova_compute[185191]: 2026-01-27 15:12:30.060 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:30 compute-0 nova_compute[185191]: 2026-01-27 15:12:30.262 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:31 compute-0 openstack_network_exporter[204239]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:12:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:12:31 compute-0 openstack_network_exporter[204239]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:12:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:12:34 compute-0 podman[239575]: 2026-01-27 15:12:34.305074731 +0000 UTC m=+0.065021432 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:12:35 compute-0 nova_compute[185191]: 2026-01-27 15:12:35.064 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:35 compute-0 nova_compute[185191]: 2026-01-27 15:12:35.264 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:39 compute-0 podman[239595]: 2026-01-27 15:12:39.320608412 +0000 UTC m=+0.064857348 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:12:39 compute-0 podman[239594]: 2026-01-27 15:12:39.328351078 +0000 UTC m=+0.070286532 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, managed_by=edpm_ansible, config_id=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Jan 27 15:12:40 compute-0 nova_compute[185191]: 2026-01-27 15:12:40.067 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:40 compute-0 nova_compute[185191]: 2026-01-27 15:12:40.266 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:45 compute-0 nova_compute[185191]: 2026-01-27 15:12:45.069 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:45 compute-0 nova_compute[185191]: 2026-01-27 15:12:45.268 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:45 compute-0 podman[239636]: 2026-01-27 15:12:45.32931938 +0000 UTC m=+0.082578200 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:12:50 compute-0 nova_compute[185191]: 2026-01-27 15:12:50.072 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:50 compute-0 nova_compute[185191]: 2026-01-27 15:12:50.272 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:53 compute-0 podman[239662]: 2026-01-27 15:12:53.332841423 +0000 UTC m=+0.080811303 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 27 15:12:55 compute-0 nova_compute[185191]: 2026-01-27 15:12:55.076 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:55 compute-0 nova_compute[185191]: 2026-01-27 15:12:55.274 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:12:56 compute-0 podman[239680]: 2026-01-27 15:12:56.323150968 +0000 UTC m=+0.077673660 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 27 15:12:56 compute-0 podman[239682]: 2026-01-27 15:12:56.332799505 +0000 UTC m=+0.077612658 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public)
Jan 27 15:12:56 compute-0 podman[239681]: 2026-01-27 15:12:56.379051596 +0000 UTC m=+0.128624576 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:12:59 compute-0 podman[201073]: time="2026-01-27T15:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:12:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:12:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 15:13:00 compute-0 nova_compute[185191]: 2026-01-27 15:13:00.078 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:13:00.223 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:13:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:13:00.224 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:13:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:13:00.225 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:13:00 compute-0 nova_compute[185191]: 2026-01-27 15:13:00.275 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:01 compute-0 openstack_network_exporter[204239]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:13:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:13:01 compute-0 openstack_network_exporter[204239]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:13:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:13:05 compute-0 nova_compute[185191]: 2026-01-27 15:13:05.081 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:05 compute-0 nova_compute[185191]: 2026-01-27 15:13:05.276 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:05 compute-0 podman[239745]: 2026-01-27 15:13:05.313294734 +0000 UTC m=+0.064726655 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
Jan 27 15:13:07 compute-0 sshd-session[239764]: Invalid user solv from 2.57.122.238 port 58504
Jan 27 15:13:07 compute-0 sshd-session[239764]: Connection closed by invalid user solv 2.57.122.238 port 58504 [preauth]
Jan 27 15:13:10 compute-0 nova_compute[185191]: 2026-01-27 15:13:10.084 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:10 compute-0 nova_compute[185191]: 2026-01-27 15:13:10.277 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:10 compute-0 podman[239767]: 2026-01-27 15:13:10.312453685 +0000 UTC m=+0.059258129 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:13:10 compute-0 podman[239766]: 2026-01-27 15:13:10.317605473 +0000 UTC m=+0.067727715 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=kepler, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release-0.7.12=)
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.984 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.985 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.992 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.994 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:13:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:10.997 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b98b01bd-8dfe-4188-be2f-ebffe0bd1717 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.464 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 15:13:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-846ae296-f41c-425c-a600-30b4e55230ad x-openstack-request-id: req-846ae296-f41c-425c-a600-30b4e55230ad _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.465 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b98b01bd-8dfe-4188-be2f-ebffe0bd1717", "name": "vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4", "status": "ACTIVE", "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "user_id": "24260fb24da44b10b598f9c822c026b8", "metadata": {"metering.server_group": "92e45285-9077-420c-bb23-df5c16dca6b3"}, "hostId": "3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb", "image": {"id": "2b336e4b-c98e-4b97-9f8f-b3290e6b6caf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"}]}, "flavor": {"id": "26a24ace-a5af-47b3-9314-7d2b9e74c6b8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/26a24ace-a5af-47b3-9314-7d2b9e74c6b8"}]}, "created": "2026-01-27T15:11:41Z", "updated": "2026-01-27T15:11:52Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.246", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:86:b3"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:86:b3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b98b01bd-8dfe-4188-be2f-ebffe0bd1717"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b98b01bd-8dfe-4188-be2f-ebffe0bd1717"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:11:52.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.465 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b98b01bd-8dfe-4188-be2f-ebffe0bd1717 used request id req-846ae296-f41c-425c-a600-30b4e55230ad request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.466 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'name': 'vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:13:12.467098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.549 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.550 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.551 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.625 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 1660594359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.626 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 11551078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.626 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:13:12.627370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.628 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.628 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.628 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.628 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.629 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:13:12.629673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.659 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.659 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.659 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.679 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.679 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.680 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:13:12.681565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.684 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b98b01bd-8dfe-4188-be2f-ebffe0bd1717 / tap62a0d85c-d2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:13:12.689014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.690 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:13:12.690080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.691 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:13:12.691403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:13:12.692775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.724 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 36110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.754 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/cpu volume: 35010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.755 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.757 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.757 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:13:12.755741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 49.09765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:13:12.756963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:13:12.758229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.759 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:13:12.759390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4>]
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:13:12.760634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.761 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:13:12.761691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.762 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.762 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.763 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.763 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.763 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.763 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:13:12.763204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.764 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.765 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.765 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.765 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:13:12.764828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.766 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:13:12.766406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.767 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.767 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.767 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.769 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.769 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:13:12.769062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.770 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:13:12.770620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.771 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.771 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.771 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.771 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 465 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:13:12.772998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:13:12.774098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.774 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 692341321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 100159582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 227319301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.775 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4>]
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:13:12.776022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.777 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.777 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.777 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.777 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:13:12.776900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.778 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.778 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.779 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:13:12.779435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.780 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.780 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.780 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.781 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.782 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:13:12.781865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.783 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.784 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.784 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:13:12.783224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:13:12.786 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:13:15 compute-0 nova_compute[185191]: 2026-01-27 15:13:15.086 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:15 compute-0 nova_compute[185191]: 2026-01-27 15:13:15.279 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:16 compute-0 podman[239809]: 2026-01-27 15:13:16.350575424 +0000 UTC m=+0.104198934 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.088 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.282 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:13:20 compute-0 nova_compute[185191]: 2026-01-27 15:13:20.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.151 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.152 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.152 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.152 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.616 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.682 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.682 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.739 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.740 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.819 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.820 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.889 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.899 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.970 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:21 compute-0 nova_compute[185191]: 2026-01-27 15:13:21.971 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.034 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.035 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.101 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.102 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.176 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.537 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.539 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.40110778808594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.539 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.539 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.834 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.835 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.835 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.835 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.897 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.976 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.977 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:13:22 compute-0 nova_compute[185191]: 2026-01-27 15:13:22.978 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:13:23 compute-0 nova_compute[185191]: 2026-01-27 15:13:23.973 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:23 compute-0 nova_compute[185191]: 2026-01-27 15:13:23.974 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:23 compute-0 nova_compute[185191]: 2026-01-27 15:13:23.974 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:13:24 compute-0 podman[239856]: 2026-01-27 15:13:24.357774973 +0000 UTC m=+0.108270094 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:13:24 compute-0 nova_compute[185191]: 2026-01-27 15:13:24.474 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:13:24 compute-0 nova_compute[185191]: 2026-01-27 15:13:24.474 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:13:24 compute-0 nova_compute[185191]: 2026-01-27 15:13:24.474 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:13:25 compute-0 nova_compute[185191]: 2026-01-27 15:13:25.090 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:25 compute-0 nova_compute[185191]: 2026-01-27 15:13:25.283 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:25 compute-0 nova_compute[185191]: 2026-01-27 15:13:25.914 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:13:26 compute-0 nova_compute[185191]: 2026-01-27 15:13:26.206 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:13:26 compute-0 nova_compute[185191]: 2026-01-27 15:13:26.206 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:13:26 compute-0 nova_compute[185191]: 2026-01-27 15:13:26.208 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:26 compute-0 nova_compute[185191]: 2026-01-27 15:13:26.209 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:26 compute-0 nova_compute[185191]: 2026-01-27 15:13:26.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:13:27 compute-0 podman[239877]: 2026-01-27 15:13:27.328045969 +0000 UTC m=+0.075032249 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Jan 27 15:13:27 compute-0 podman[239875]: 2026-01-27 15:13:27.341968054 +0000 UTC m=+0.095482270 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:13:27 compute-0 podman[239876]: 2026-01-27 15:13:27.391880126 +0000 UTC m=+0.142411921 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 27 15:13:29 compute-0 podman[201073]: time="2026-01-27T15:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:13:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:13:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 27 15:13:30 compute-0 nova_compute[185191]: 2026-01-27 15:13:30.093 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:30 compute-0 nova_compute[185191]: 2026-01-27 15:13:30.286 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:31 compute-0 openstack_network_exporter[204239]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:13:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:13:31 compute-0 openstack_network_exporter[204239]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:13:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:13:35 compute-0 nova_compute[185191]: 2026-01-27 15:13:35.096 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:35 compute-0 nova_compute[185191]: 2026-01-27 15:13:35.290 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:36 compute-0 podman[239943]: 2026-01-27 15:13:36.312227032 +0000 UTC m=+0.064006703 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:13:40 compute-0 nova_compute[185191]: 2026-01-27 15:13:40.098 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:40 compute-0 nova_compute[185191]: 2026-01-27 15:13:40.293 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:41 compute-0 podman[239964]: 2026-01-27 15:13:41.322208522 +0000 UTC m=+0.067848896 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:13:41 compute-0 podman[239963]: 2026-01-27 15:13:41.332022966 +0000 UTC m=+0.088094571 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, container_name=kepler, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public)
Jan 27 15:13:45 compute-0 nova_compute[185191]: 2026-01-27 15:13:45.102 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:45 compute-0 nova_compute[185191]: 2026-01-27 15:13:45.296 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:47 compute-0 podman[240007]: 2026-01-27 15:13:47.343178209 +0000 UTC m=+0.097160235 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:13:50 compute-0 nova_compute[185191]: 2026-01-27 15:13:50.105 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:50 compute-0 nova_compute[185191]: 2026-01-27 15:13:50.299 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:52 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:13:55 compute-0 nova_compute[185191]: 2026-01-27 15:13:55.107 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:55 compute-0 nova_compute[185191]: 2026-01-27 15:13:55.299 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:13:55 compute-0 podman[240033]: 2026-01-27 15:13:55.332608734 +0000 UTC m=+0.094096222 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 15:13:58 compute-0 podman[240054]: 2026-01-27 15:13:58.333800983 +0000 UTC m=+0.073048666 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350)
Jan 27 15:13:58 compute-0 podman[240052]: 2026-01-27 15:13:58.348104128 +0000 UTC m=+0.091785760 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:13:58 compute-0 podman[240053]: 2026-01-27 15:13:58.359555716 +0000 UTC m=+0.103921766 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 15:13:59 compute-0 podman[201073]: time="2026-01-27T15:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:13:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:13:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 15:14:00 compute-0 nova_compute[185191]: 2026-01-27 15:14:00.110 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:14:00.223 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:14:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:14:00.224 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:14:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:14:00.225 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:14:00 compute-0 nova_compute[185191]: 2026-01-27 15:14:00.302 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:01 compute-0 openstack_network_exporter[204239]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:14:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:14:01 compute-0 openstack_network_exporter[204239]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:14:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:14:05 compute-0 nova_compute[185191]: 2026-01-27 15:14:05.113 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:05 compute-0 nova_compute[185191]: 2026-01-27 15:14:05.305 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:07 compute-0 podman[240115]: 2026-01-27 15:14:07.343044073 +0000 UTC m=+0.086695173 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:14:10 compute-0 nova_compute[185191]: 2026-01-27 15:14:10.116 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:10 compute-0 nova_compute[185191]: 2026-01-27 15:14:10.307 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:12 compute-0 podman[240135]: 2026-01-27 15:14:12.30381152 +0000 UTC m=+0.064823475 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container)
Jan 27 15:14:12 compute-0 podman[240136]: 2026-01-27 15:14:12.322283237 +0000 UTC m=+0.078938294 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:14:15 compute-0 nova_compute[185191]: 2026-01-27 15:14:15.118 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:15 compute-0 nova_compute[185191]: 2026-01-27 15:14:15.308 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:18 compute-0 podman[240181]: 2026-01-27 15:14:18.316173797 +0000 UTC m=+0.069924432 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:14:20 compute-0 nova_compute[185191]: 2026-01-27 15:14:20.121 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:20 compute-0 nova_compute[185191]: 2026-01-27 15:14:20.311 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:22 compute-0 nova_compute[185191]: 2026-01-27 15:14:22.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:22 compute-0 nova_compute[185191]: 2026-01-27 15:14:22.942 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:22 compute-0 nova_compute[185191]: 2026-01-27 15:14:22.980 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:22 compute-0 nova_compute[185191]: 2026-01-27 15:14:22.981 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:14:22 compute-0 nova_compute[185191]: 2026-01-27 15:14:22.982 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:14:23 compute-0 nova_compute[185191]: 2026-01-27 15:14:23.538 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:14:23 compute-0 nova_compute[185191]: 2026-01-27 15:14:23.539 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:14:23 compute-0 nova_compute[185191]: 2026-01-27 15:14:23.541 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:14:23 compute-0 nova_compute[185191]: 2026-01-27 15:14:23.543 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.124 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.312 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.563 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.587 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.588 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.588 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.589 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.589 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.590 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.591 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.591 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.592 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.623 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.624 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.624 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.625 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.727 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.787 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.789 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.851 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.852 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.912 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.914 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.974 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:25 compute-0 nova_compute[185191]: 2026-01-27 15:14:25.985 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.051 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.052 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.121 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.124 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.194 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.196 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.266 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:14:26 compute-0 podman[240229]: 2026-01-27 15:14:26.323491013 +0000 UTC m=+0.080999890 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.779 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.781 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.40108871459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.781 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.782 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.935 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.936 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.937 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:14:26 compute-0 nova_compute[185191]: 2026-01-27 15:14:26.937 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:14:27 compute-0 nova_compute[185191]: 2026-01-27 15:14:27.061 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:14:27 compute-0 nova_compute[185191]: 2026-01-27 15:14:27.115 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:14:27 compute-0 nova_compute[185191]: 2026-01-27 15:14:27.117 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:14:27 compute-0 nova_compute[185191]: 2026-01-27 15:14:27.117 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:14:29 compute-0 podman[240248]: 2026-01-27 15:14:29.324827755 +0000 UTC m=+0.077594048 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 15:14:29 compute-0 podman[240250]: 2026-01-27 15:14:29.327459446 +0000 UTC m=+0.074401773 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
name=ubi9-minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:14:29 compute-0 podman[240249]: 2026-01-27 15:14:29.369010274 +0000 UTC m=+0.120682608 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:14:29 compute-0 nova_compute[185191]: 2026-01-27 15:14:29.472 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:14:29 compute-0 podman[201073]: time="2026-01-27T15:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:14:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:14:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 27 15:14:30 compute-0 nova_compute[185191]: 2026-01-27 15:14:30.128 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:30 compute-0 nova_compute[185191]: 2026-01-27 15:14:30.314 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:31 compute-0 openstack_network_exporter[204239]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:14:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:14:31 compute-0 openstack_network_exporter[204239]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:14:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:14:35 compute-0 nova_compute[185191]: 2026-01-27 15:14:35.131 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:35 compute-0 nova_compute[185191]: 2026-01-27 15:14:35.316 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:38 compute-0 podman[240307]: 2026-01-27 15:14:38.315162617 +0000 UTC m=+0.071314529 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 15:14:40 compute-0 nova_compute[185191]: 2026-01-27 15:14:40.133 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:40 compute-0 nova_compute[185191]: 2026-01-27 15:14:40.319 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:43 compute-0 podman[240328]: 2026-01-27 15:14:43.314280735 +0000 UTC m=+0.071481394 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:14:43 compute-0 podman[240327]: 2026-01-27 15:14:43.323573635 +0000 UTC m=+0.083851027 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.component=ubi9-container, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, 
distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9)
Jan 27 15:14:45 compute-0 nova_compute[185191]: 2026-01-27 15:14:45.136 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:45 compute-0 nova_compute[185191]: 2026-01-27 15:14:45.321 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:49 compute-0 podman[240368]: 2026-01-27 15:14:49.328158114 +0000 UTC m=+0.079021117 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:14:50 compute-0 nova_compute[185191]: 2026-01-27 15:14:50.139 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:50 compute-0 nova_compute[185191]: 2026-01-27 15:14:50.324 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:55 compute-0 nova_compute[185191]: 2026-01-27 15:14:55.143 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:55 compute-0 nova_compute[185191]: 2026-01-27 15:14:55.325 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:14:57 compute-0 podman[240394]: 2026-01-27 15:14:57.339938009 +0000 UTC m=+0.100344470 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:14:59 compute-0 podman[201073]: time="2026-01-27T15:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:14:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:14:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 15:15:00 compute-0 nova_compute[185191]: 2026-01-27 15:15:00.146 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:15:00.225 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:15:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:15:00.225 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:15:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:15:00.226 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:15:00 compute-0 podman[240412]: 2026-01-27 15:15:00.314604676 +0000 UTC m=+0.065788782 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 27 15:15:00 compute-0 nova_compute[185191]: 2026-01-27 15:15:00.326 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:00 compute-0 podman[240414]: 2026-01-27 15:15:00.349038822 +0000 UTC m=+0.092033237 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 15:15:00 compute-0 podman[240413]: 2026-01-27 15:15:00.395974764 +0000 UTC m=+0.143455480 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 15:15:01 compute-0 openstack_network_exporter[204239]: ERROR   15:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:15:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:15:01 compute-0 openstack_network_exporter[204239]: ERROR   15:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:15:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:15:05 compute-0 nova_compute[185191]: 2026-01-27 15:15:05.149 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:05 compute-0 nova_compute[185191]: 2026-01-27 15:15:05.328 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:09 compute-0 podman[240475]: 2026-01-27 15:15:09.390262481 +0000 UTC m=+0.120994126 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 15:15:10 compute-0 nova_compute[185191]: 2026-01-27 15:15:10.155 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:10 compute-0 nova_compute[185191]: 2026-01-27 15:15:10.331 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.985 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.985 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.998 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'name': 'vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.004 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:15:11.004596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.069 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.070 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.071 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.142 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 1674942485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.143 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 11551078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.143 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.145 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.146 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:15:11.146157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.147 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.147 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.148 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.148 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.149 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:15:11.150917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.182 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.182 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.183 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.208 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.209 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.209 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:15:11.210793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.216 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.221 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.222 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:15:11.220981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:15:11.221899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:15:11.222724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.224 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.224 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:15:11.224135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.254 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 37470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.286 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/cpu volume: 130850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.287 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.289 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.289 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:15:11.288082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:15:11.289205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:15:11.290864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.291 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 49.09765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.291 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.292 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:15:11.292529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:15:11.294104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.295 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:15:11.295501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:15:11.296867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.298 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.299 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:15:11.298250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.299 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.299 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.300 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.301 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.301 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:15:11.300711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.302 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.303 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:15:11.302174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.303 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.303 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:15:11.304557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes.delta volume: 2746 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.306 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.306 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:15:11.305962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.306 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.306 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 697193681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 100159582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 227319301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.308 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:15:11.308119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.310 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.311 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:15:11.310057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:15:11.312136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:15:11.313811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.314 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.315 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:15:11.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:15:13 compute-0 nova_compute[185191]: 2026-01-27 15:15:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:14 compute-0 podman[240495]: 2026-01-27 15:15:14.385719541 +0000 UTC m=+0.126948046 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Jan 27 15:15:14 compute-0 podman[240496]: 2026-01-27 15:15:14.389818241 +0000 UTC m=+0.118093428 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:15:15 compute-0 nova_compute[185191]: 2026-01-27 15:15:15.160 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:15 compute-0 nova_compute[185191]: 2026-01-27 15:15:15.334 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:17 compute-0 sshd-session[240536]: Invalid user solv from 2.57.122.238 port 48084
Jan 27 15:15:17 compute-0 sshd-session[240536]: Connection closed by invalid user solv 2.57.122.238 port 48084 [preauth]
Jan 27 15:15:18 compute-0 nova_compute[185191]: 2026-01-27 15:15:18.014 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:18 compute-0 nova_compute[185191]: 2026-01-27 15:15:18.014 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:15:20 compute-0 nova_compute[185191]: 2026-01-27 15:15:20.166 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:20 compute-0 nova_compute[185191]: 2026-01-27 15:15:20.340 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:20 compute-0 podman[240539]: 2026-01-27 15:15:20.377173115 +0000 UTC m=+0.116394982 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:15:21 compute-0 nova_compute[185191]: 2026-01-27 15:15:21.448 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:21 compute-0 nova_compute[185191]: 2026-01-27 15:15:21.449 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:15:21 compute-0 nova_compute[185191]: 2026-01-27 15:15:21.492 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:15:22 compute-0 nova_compute[185191]: 2026-01-27 15:15:22.988 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:22 compute-0 nova_compute[185191]: 2026-01-27 15:15:22.990 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:22 compute-0 nova_compute[185191]: 2026-01-27 15:15:22.990 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:15:23 compute-0 nova_compute[185191]: 2026-01-27 15:15:23.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:23 compute-0 nova_compute[185191]: 2026-01-27 15:15:23.942 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:23 compute-0 nova_compute[185191]: 2026-01-27 15:15:23.999 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.000 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.000 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.001 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.189 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.263 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.265 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.347 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.350 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.411 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.412 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.472 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.484 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.550 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.556 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.617 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.619 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.680 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.681 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:15:24 compute-0 nova_compute[185191]: 2026-01-27 15:15:24.746 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.111 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.114 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5059MB free_disk=72.40011215209961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.115 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.116 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.168 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.347 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.485 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.486 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.486 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.486 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.693 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.969 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:15:25 compute-0 nova_compute[185191]: 2026-01-27 15:15:25.970 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.039 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.195 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.519 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.609 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.612 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:15:26 compute-0 nova_compute[185191]: 2026-01-27 15:15:26.612 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:15:27 compute-0 nova_compute[185191]: 2026-01-27 15:15:27.614 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:27 compute-0 nova_compute[185191]: 2026-01-27 15:15:27.616 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:15:28 compute-0 podman[240585]: 2026-01-27 15:15:28.335343208 +0000 UTC m=+0.089501702 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 15:15:28 compute-0 nova_compute[185191]: 2026-01-27 15:15:28.744 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:15:28 compute-0 nova_compute[185191]: 2026-01-27 15:15:28.746 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:15:28 compute-0 nova_compute[185191]: 2026-01-27 15:15:28.746 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:15:29 compute-0 podman[201073]: time="2026-01-27T15:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:15:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:15:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.173 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.340 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.910 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.954 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.955 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.957 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:30 compute-0 nova_compute[185191]: 2026-01-27 15:15:30.957 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:15:31 compute-0 podman[240606]: 2026-01-27 15:15:31.374885714 +0000 UTC m=+0.114734062 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Jan 27 15:15:31 compute-0 podman[240604]: 2026-01-27 15:15:31.381287486 +0000 UTC m=+0.130838285 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:15:31 compute-0 podman[240605]: 2026-01-27 15:15:31.396061444 +0000 UTC m=+0.140774183 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 15:15:31 compute-0 openstack_network_exporter[204239]: ERROR   15:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:15:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:15:31 compute-0 openstack_network_exporter[204239]: ERROR   15:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:15:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:15:35 compute-0 nova_compute[185191]: 2026-01-27 15:15:35.179 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:35 compute-0 nova_compute[185191]: 2026-01-27 15:15:35.342 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:40 compute-0 nova_compute[185191]: 2026-01-27 15:15:40.182 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:40 compute-0 podman[240664]: 2026-01-27 15:15:40.311304012 +0000 UTC m=+0.067265763 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 15:15:40 compute-0 nova_compute[185191]: 2026-01-27 15:15:40.343 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:44 compute-0 podman[240686]: 2026-01-27 15:15:44.744838258 +0000 UTC m=+0.057684625 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:15:44 compute-0 podman[240685]: 2026-01-27 15:15:44.760387946 +0000 UTC m=+0.077342284 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4)
Jan 27 15:15:45 compute-0 nova_compute[185191]: 2026-01-27 15:15:45.185 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:45 compute-0 nova_compute[185191]: 2026-01-27 15:15:45.346 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:50 compute-0 nova_compute[185191]: 2026-01-27 15:15:50.187 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:50 compute-0 nova_compute[185191]: 2026-01-27 15:15:50.348 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:51 compute-0 podman[240726]: 2026-01-27 15:15:51.352875547 +0000 UTC m=+0.111341300 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:15:55 compute-0 nova_compute[185191]: 2026-01-27 15:15:55.191 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:55 compute-0 nova_compute[185191]: 2026-01-27 15:15:55.350 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:15:59 compute-0 podman[240748]: 2026-01-27 15:15:59.344280191 +0000 UTC m=+0.095427122 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:15:59 compute-0 podman[201073]: time="2026-01-27T15:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:15:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:15:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:16:00 compute-0 nova_compute[185191]: 2026-01-27 15:16:00.196 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:00.227 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:00.228 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:00.229 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:00 compute-0 nova_compute[185191]: 2026-01-27 15:16:00.352 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:01 compute-0 openstack_network_exporter[204239]: ERROR   15:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:16:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:16:01 compute-0 openstack_network_exporter[204239]: ERROR   15:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:16:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:16:02 compute-0 podman[240772]: 2026-01-27 15:16:02.337008925 +0000 UTC m=+0.082131403 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Jan 27 15:16:02 compute-0 podman[240770]: 2026-01-27 15:16:02.357092956 +0000 UTC m=+0.113108398 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:16:02 compute-0 podman[240771]: 2026-01-27 15:16:02.378724949 +0000 UTC m=+0.120598720 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:16:05 compute-0 nova_compute[185191]: 2026-01-27 15:16:05.203 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:05 compute-0 nova_compute[185191]: 2026-01-27 15:16:05.355 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:10 compute-0 nova_compute[185191]: 2026-01-27 15:16:10.208 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:10 compute-0 nova_compute[185191]: 2026-01-27 15:16:10.359 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:11 compute-0 podman[240832]: 2026-01-27 15:16:11.323149474 +0000 UTC m=+0.082304128 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:16:15 compute-0 nova_compute[185191]: 2026-01-27 15:16:15.215 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:15 compute-0 podman[240852]: 2026-01-27 15:16:15.302384852 +0000 UTC m=+0.057010277 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:16:15 compute-0 podman[240851]: 2026-01-27 15:16:15.327418466 +0000 UTC m=+0.082634647 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=kepler, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:16:15 compute-0 nova_compute[185191]: 2026-01-27 15:16:15.362 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:20 compute-0 nova_compute[185191]: 2026-01-27 15:16:20.218 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:20 compute-0 nova_compute[185191]: 2026-01-27 15:16:20.364 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:22 compute-0 podman[240894]: 2026-01-27 15:16:22.297099407 +0000 UTC m=+0.054813178 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:16:22 compute-0 nova_compute[185191]: 2026-01-27 15:16:22.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:23 compute-0 nova_compute[185191]: 2026-01-27 15:16:23.357 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:23 compute-0 nova_compute[185191]: 2026-01-27 15:16:23.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:23 compute-0 nova_compute[185191]: 2026-01-27 15:16:23.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.036 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.037 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.038 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.039 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.290 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.346 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.348 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.403 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.404 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.461 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.462 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.520 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.527 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.605 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.611 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.665 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.666 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.759 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.761 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:24 compute-0 nova_compute[185191]: 2026-01-27 15:16:24.822 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.188 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.190 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=72.40019989013672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.191 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.191 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.220 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.366 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.400 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.402 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.403 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.404 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.518 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.589 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.591 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:16:25 compute-0 nova_compute[185191]: 2026-01-27 15:16:25.591 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.590 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.591 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.592 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.867 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.872 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.873 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:16:26 compute-0 nova_compute[185191]: 2026-01-27 15:16:26.873 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.005 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.119 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.119 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.120 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.121 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.122 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.122 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.123 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:16:29 compute-0 nova_compute[185191]: 2026-01-27 15:16:29.124 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:16:29 compute-0 podman[201073]: time="2026-01-27T15:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:16:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:16:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 15:16:30 compute-0 nova_compute[185191]: 2026-01-27 15:16:30.229 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:30.300 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:16:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:30.301 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:16:30 compute-0 nova_compute[185191]: 2026-01-27 15:16:30.302 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:30 compute-0 podman[240943]: 2026-01-27 15:16:30.368302699 +0000 UTC m=+0.118312728 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 15:16:30 compute-0 nova_compute[185191]: 2026-01-27 15:16:30.369 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:31 compute-0 openstack_network_exporter[204239]: ERROR   15:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:16:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:16:31 compute-0 openstack_network_exporter[204239]: ERROR   15:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:16:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:16:33 compute-0 podman[240959]: 2026-01-27 15:16:33.354490018 +0000 UTC m=+0.106955282 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:16:33 compute-0 podman[240961]: 2026-01-27 15:16:33.365833274 +0000 UTC m=+0.112479871 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Jan 27 15:16:33 compute-0 podman[240960]: 2026-01-27 15:16:33.378780623 +0000 UTC m=+0.129682695 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:16:35 compute-0 nova_compute[185191]: 2026-01-27 15:16:35.233 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:35 compute-0 nova_compute[185191]: 2026-01-27 15:16:35.370 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:36.302 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:40 compute-0 nova_compute[185191]: 2026-01-27 15:16:40.239 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:40 compute-0 nova_compute[185191]: 2026-01-27 15:16:40.373 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:42 compute-0 podman[241018]: 2026-01-27 15:16:42.392072692 +0000 UTC m=+0.134459843 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.038 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.040 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.124 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.328 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.330 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.344 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.345 185195 INFO nova.compute.claims [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.623 185195 DEBUG nova.compute.provider_tree [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.653 185195 DEBUG nova.scheduler.client.report [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.713 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.714 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.793 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.794 185195 DEBUG nova.network.neutron [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:16:43 compute-0 nova_compute[185191]: 2026-01-27 15:16:43.894 185195 INFO nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.024 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.278 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.281 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.282 185195 INFO nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Creating image(s)
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.284 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.285 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.286 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.319 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.414 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.415 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.416 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.432 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.512 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.514 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.594 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk 1073741824" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.596 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.596 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.654 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.657 185195 DEBUG nova.virt.disk.api [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking if we can resize image /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.658 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.720 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.721 185195 DEBUG nova.virt.disk.api [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Cannot resize image /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.722 185195 DEBUG nova.objects.instance [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'migration_context' on Instance uuid 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.875 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.876 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.877 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.891 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.981 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.982 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.983 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:44 compute-0 nova_compute[185191]: 2026-01-27 15:16:44.994 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.050 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.051 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.097 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.098 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.099 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.174 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.177 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.177 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Ensure instance console log exists: /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.178 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.179 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.179 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.246 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:45 compute-0 nova_compute[185191]: 2026-01-27 15:16:45.376 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:46 compute-0 podman[241066]: 2026-01-27 15:16:46.365700039 +0000 UTC m=+0.098993288 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:16:46 compute-0 podman[241065]: 2026-01-27 15:16:46.381174596 +0000 UTC m=+0.126811697 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.184 185195 DEBUG nova.network.neutron [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Successfully updated port: 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.343 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.344 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquired lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.344 185195 DEBUG nova.network.neutron [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.609 185195 DEBUG nova.compute.manager [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-changed-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.610 185195 DEBUG nova.compute.manager [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Refreshing instance network info cache due to event network-changed-0828fa2e-a05a-47f8-aab3-325c1f3f2c06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.611 185195 DEBUG oslo_concurrency.lockutils [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:16:47 compute-0 nova_compute[185191]: 2026-01-27 15:16:47.687 185195 DEBUG nova.network.neutron [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.440 185195 DEBUG nova.network.neutron [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.522 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Releasing lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.522 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Instance network_info: |[{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.523 185195 DEBUG oslo_concurrency.lockutils [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.524 185195 DEBUG nova.network.neutron [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Refreshing network info cache for port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.528 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Start _get_guest_xml network_info=[{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.537 185195 WARNING nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.544 185195 DEBUG nova.virt.libvirt.host [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.545 185195 DEBUG nova.virt.libvirt.host [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.550 185195 DEBUG nova.virt.libvirt.host [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.551 185195 DEBUG nova.virt.libvirt.host [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.551 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.552 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:08:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='26a24ace-a5af-47b3-9314-7d2b9e74c6b8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.553 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.553 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.554 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.555 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.555 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.556 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.556 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.557 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.558 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.558 185195 DEBUG nova.virt.hardware [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.564 185195 DEBUG nova.virt.libvirt.vif [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',id=3,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-stizfz0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:16:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjgwMDgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 15:16:49 compute-0 nova_compute[185191]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjgwMDgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=221a9a46-46a7-4a1b-ad5b-5d1eca64c106,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.565 185195 DEBUG nova.network.os_vif_util [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.567 185195 DEBUG nova.network.os_vif_util [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.568 185195 DEBUG nova.objects.instance [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.623 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <uuid>221a9a46-46a7-4a1b-ad5b-5d1eca64c106</uuid>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <name>instance-00000003</name>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <memory>524288</memory>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:name>vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v</nova:name>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:16:49</nova:creationTime>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:flavor name="m1.small">
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:memory>512</nova:memory>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:user uuid="24260fb24da44b10b598f9c822c026b8">admin</nova:user>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:project uuid="dd88ca4062da4fb9bedb3a0002a43c12">admin</nova:project>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         <nova:port uuid="0828fa2e-a05a-47f8-aab3-325c1f3f2c06">
Jan 27 15:16:49 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="192.168.0.205" ipVersion="4"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <system>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="serial">221a9a46-46a7-4a1b-ad5b-5d1eca64c106</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="uuid">221a9a46-46a7-4a1b-ad5b-5d1eca64c106</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </system>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <os>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </os>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <features>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </features>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <target dev="vdb" bus="virtio"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.config"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:ff:42:e6"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <target dev="tap0828fa2e-a0"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/console.log" append="off"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <video>
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </video>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:16:49 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:16:49 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:16:49 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:16:49 compute-0 nova_compute[185191]: </domain>
Jan 27 15:16:49 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.624 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Preparing to wait for external event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.625 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.625 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.625 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.626 185195 DEBUG nova.virt.libvirt.vif [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',id=3,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-stizfz0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:16:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjgwMDgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 15:16:49 compute-0 nova_compute[185191]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjgwMDgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=221a9a46-46a7-4a1b-ad5b-5d1eca64c106,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.626 185195 DEBUG nova.network.os_vif_util [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.627 185195 DEBUG nova.network.os_vif_util [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.628 185195 DEBUG os_vif [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.628 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.629 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.629 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.633 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.634 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0828fa2e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.634 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0828fa2e-a0, col_values=(('external_ids', {'iface-id': '0828fa2e-a05a-47f8-aab3-325c1f3f2c06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:42:e6', 'vm-uuid': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:49 compute-0 NetworkManager[56090]: <info>  [1769527009.6376] manager: (tap0828fa2e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.636 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.639 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.651 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:49 compute-0 nova_compute[185191]: 2026-01-27 15:16:49.652 185195 INFO os_vif [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0')
Jan 27 15:16:49 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:16:49.564 185195 DEBUG nova.virt.libvirt.vif [None req-48a4c81e-94 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:16:49 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:16:49.626 185195 DEBUG nova.virt.libvirt.vif [None req-48a4c81e-94 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.076 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.076 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.077 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.078 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No VIF found with MAC fa:16:3e:ff:42:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.079 185195 INFO nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Using config drive
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.379 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.449 185195 INFO nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Creating config drive at /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.config
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.463 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqzyi8yx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.622 185195 DEBUG oslo_concurrency.processutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqzyi8yx" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:16:50 compute-0 NetworkManager[56090]: <info>  [1769527010.7239] manager: (tap0828fa2e-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 27 15:16:50 compute-0 kernel: tap0828fa2e-a0: entered promiscuous mode
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.728 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:50 compute-0 ovn_controller[97541]: 2026-01-27T15:16:50Z|00040|binding|INFO|Claiming lport 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 for this chassis.
Jan 27 15:16:50 compute-0 ovn_controller[97541]: 2026-01-27T15:16:50Z|00041|binding|INFO|0828fa2e-a05a-47f8-aab3-325c1f3f2c06: Claiming fa:16:3e:ff:42:e6 192.168.0.205
Jan 27 15:16:50 compute-0 ovn_controller[97541]: 2026-01-27T15:16:50Z|00042|binding|INFO|Setting lport 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 ovn-installed in OVS
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.764 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:50 compute-0 systemd-machined[156506]: New machine qemu-3-instance-00000003.
Jan 27 15:16:50 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 27 15:16:50 compute-0 systemd-udevd[241129]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:16:50 compute-0 ovn_controller[97541]: 2026-01-27T15:16:50Z|00043|binding|INFO|Setting lport 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 up in Southbound
Jan 27 15:16:50 compute-0 NetworkManager[56090]: <info>  [1769527010.8396] device (tap0828fa2e-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.840 185195 DEBUG nova.network.neutron [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updated VIF entry in instance network info cache for port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.836 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:42:e6 192.168.0.205'], port_security=['fa:16:3e:ff:42:e6 192.168.0.205'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-yssw4loy7awz-x7cghs3h76zg-port-yufms5xkxqrs', 'neutron:cidrs': '192.168.0.205/24', 'neutron:device_id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-yssw4loy7awz-x7cghs3h76zg-port-yufms5xkxqrs', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '2', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=0828fa2e-a05a-47f8-aab3-325c1f3f2c06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.839 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 in datapath d7e37fe5-6354-4f61-95d0-78632be96811 bound to our chassis
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.841 185195 DEBUG nova.network.neutron [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.843 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:16:50 compute-0 NetworkManager[56090]: <info>  [1769527010.8503] device (tap0828fa2e-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.861 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6084c1c9-0fda-405b-a4b0-fcc22b196765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.897 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8fa007-68da-49af-b7f5-0796ede1153f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.902 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[6944b017-47ec-49f4-a6ce-87c7eaee8ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.930 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[2b90fa29-4ba9-4b39-bc84-5d4f905deaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.946 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6a76f652-3fa9-45b4-bb98-fb10fbaa44ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 36147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241143, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.960 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0e42df-de16-4646-b6e3-3a33ba15fb51]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241144, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241144, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.961 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.964 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:50 compute-0 nova_compute[185191]: 2026-01-27 15:16:50.964 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.965 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.966 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:16:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:16:50.966 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.072 185195 DEBUG oslo_concurrency.lockutils [req-d6c9dd25-3551-446f-b0ba-103057281964 req-6c236d17-90e0-4711-aa1a-18d2acc1648c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.131 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527011.1311827, 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.132 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] VM Started (Lifecycle Event)
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.325 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.331 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527011.131264, 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.331 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] VM Paused (Lifecycle Event)
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.629 185195 DEBUG nova.compute.manager [req-ac179bc0-4271-41eb-a67e-54c89c6371bd req-f7266615-c256-4f10-b677-a1e443ea3b21 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.629 185195 DEBUG oslo_concurrency.lockutils [req-ac179bc0-4271-41eb-a67e-54c89c6371bd req-f7266615-c256-4f10-b677-a1e443ea3b21 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.630 185195 DEBUG oslo_concurrency.lockutils [req-ac179bc0-4271-41eb-a67e-54c89c6371bd req-f7266615-c256-4f10-b677-a1e443ea3b21 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.630 185195 DEBUG oslo_concurrency.lockutils [req-ac179bc0-4271-41eb-a67e-54c89c6371bd req-f7266615-c256-4f10-b677-a1e443ea3b21 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.630 185195 DEBUG nova.compute.manager [req-ac179bc0-4271-41eb-a67e-54c89c6371bd req-f7266615-c256-4f10-b677-a1e443ea3b21 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Processing event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.631 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.637 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.640 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.648 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527011.6356175, 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.649 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] VM Resumed (Lifecycle Event)
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.651 185195 INFO nova.virt.libvirt.driver [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Instance spawned successfully.
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.651 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.892 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.892 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.893 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.893 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.893 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.893 185195 DEBUG nova.virt.libvirt.driver [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.926 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:16:51 compute-0 nova_compute[185191]: 2026-01-27 15:16:51.931 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:16:52 compute-0 nova_compute[185191]: 2026-01-27 15:16:52.149 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:16:52 compute-0 nova_compute[185191]: 2026-01-27 15:16:52.263 185195 INFO nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Took 7.98 seconds to spawn the instance on the hypervisor.
Jan 27 15:16:52 compute-0 nova_compute[185191]: 2026-01-27 15:16:52.264 185195 DEBUG nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:16:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:16:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:16:52 compute-0 nova_compute[185191]: 2026-01-27 15:16:52.458 185195 INFO nova.compute.manager [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Took 9.18 seconds to build instance.
Jan 27 15:16:52 compute-0 podman[241152]: 2026-01-27 15:16:52.53016883 +0000 UTC m=+0.096927342 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:16:52 compute-0 nova_compute[185191]: 2026-01-27 15:16:52.571 185195 DEBUG oslo_concurrency.lockutils [None req-48a4c81e-9432-4986-9a19-b11f34ee7839 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.011 185195 DEBUG nova.compute.manager [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.012 185195 DEBUG oslo_concurrency.lockutils [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.013 185195 DEBUG oslo_concurrency.lockutils [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.013 185195 DEBUG oslo_concurrency.lockutils [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.014 185195 DEBUG nova.compute.manager [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] No waiting events found dispatching network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.015 185195 WARNING nova.compute.manager [req-7f6e4b43-28dc-4ffd-8f8b-cd80c312f749 req-d691684e-7685-4ea7-8a63-71501d32e192 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received unexpected event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 for instance with vm_state active and task_state None.
Jan 27 15:16:54 compute-0 nova_compute[185191]: 2026-01-27 15:16:54.639 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:55 compute-0 nova_compute[185191]: 2026-01-27 15:16:55.384 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:59 compute-0 nova_compute[185191]: 2026-01-27 15:16:59.645 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:16:59 compute-0 podman[201073]: time="2026-01-27T15:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:16:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:16:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 15:17:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:17:00.228 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:17:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:17:00.228 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:17:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:17:00.229 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:17:00 compute-0 nova_compute[185191]: 2026-01-27 15:17:00.387 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:01 compute-0 podman[241195]: 2026-01-27 15:17:01.315834229 +0000 UTC m=+0.076617075 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:17:01 compute-0 openstack_network_exporter[204239]: ERROR   15:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:17:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:17:01 compute-0 openstack_network_exporter[204239]: ERROR   15:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:17:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:17:04 compute-0 podman[241213]: 2026-01-27 15:17:04.368358824 +0000 UTC m=+0.115252016 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260126, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 27 15:17:04 compute-0 podman[241215]: 2026-01-27 15:17:04.384435587 +0000 UTC m=+0.124752191 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 27 15:17:04 compute-0 podman[241214]: 2026-01-27 15:17:04.42426939 +0000 UTC m=+0.164658256 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 27 15:17:04 compute-0 nova_compute[185191]: 2026-01-27 15:17:04.648 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:05 compute-0 nova_compute[185191]: 2026-01-27 15:17:05.392 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:09 compute-0 nova_compute[185191]: 2026-01-27 15:17:09.649 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:10 compute-0 nova_compute[185191]: 2026-01-27 15:17:10.394 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.985 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.986 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.993 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:17:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:10.996 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/221a9a46-46a7-4a1b-ad5b-5d1eca64c106 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.006 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 15:17:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f3e44e3f-fd31-499c-889f-0698ecfffe41 x-openstack-request-id: req-f3e44e3f-fd31-499c-889f-0698ecfffe41 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.006 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "221a9a46-46a7-4a1b-ad5b-5d1eca64c106", "name": "vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v", "status": "ACTIVE", "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "user_id": "24260fb24da44b10b598f9c822c026b8", "metadata": {"metering.server_group": "92e45285-9077-420c-bb23-df5c16dca6b3"}, "hostId": "3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb", "image": {"id": "2b336e4b-c98e-4b97-9f8f-b3290e6b6caf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"}]}, "flavor": {"id": "26a24ace-a5af-47b3-9314-7d2b9e74c6b8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/26a24ace-a5af-47b3-9314-7d2b9e74c6b8"}]}, "created": "2026-01-27T15:16:38Z", "updated": "2026-01-27T15:16:52Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.205", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ff:42:e6"}, {"version": 4, "addr": "192.168.122.217", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ff:42:e6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/221a9a46-46a7-4a1b-ad5b-5d1eca64c106"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/221a9a46-46a7-4a1b-ad5b-5d1eca64c106"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:16:52.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.006 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/221a9a46-46a7-4a1b-ad5b-5d1eca64c106 used request id req-f3e44e3f-fd31-499c-889f-0698ecfffe41 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.007 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'name': 'vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.010 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.012 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'name': 'vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:17:12.013376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.076 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.076 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.077 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.140 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.140 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.140 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.210 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 1674942485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.211 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 11551078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.211 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:17:12.212930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.213 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.214 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.214 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.215 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.215 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.215 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.215 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.215 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.216 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:17:12.217129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.242 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.243 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.243 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.263 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.263 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.264 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.300 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.300 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.301 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:17:12.302279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.305 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 / tap0828fa2e-a0 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.305 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.308 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.311 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.315 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.316 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:17:12.313007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:17:12.314358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:17:12.315562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:17:12.317479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.339 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/cpu volume: 20280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.358 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 38940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.377 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/cpu volume: 251460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.378 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.380 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106: ceilometer.compute.pollsters.NoVolumeException
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.381 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.382 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.383 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v>]
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.384 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.385 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.386 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.387 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.388 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.389 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.390 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:17:12.378996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.391 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.392 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.392 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.392 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.392 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.392 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 681123614 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 3691070 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.395 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.396 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.396 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.396 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 697193681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:17:12.380227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.396 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 100159582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.396 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 227319301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v>]
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.398 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.400 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.401 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.401 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.401 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.401 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.402 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:17:12.381427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.403 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.404 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:17:12.382710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:17:12.383879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:17:12.384516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:17:12.386269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:17:12.387378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:17:12.388463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:17:12.390741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:17:12.391773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:17:12.394181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:17:12.395269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:17:12.397389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:17:12.397972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:17:12.400115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:17:12.402405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:17:12.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:17:12.403438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:17:13 compute-0 podman[241280]: 2026-01-27 15:17:13.317234308 +0000 UTC m=+0.067652133 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:17:14 compute-0 nova_compute[185191]: 2026-01-27 15:17:14.651 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:15 compute-0 nova_compute[185191]: 2026-01-27 15:17:15.397 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:17 compute-0 podman[241303]: 2026-01-27 15:17:17.339582977 +0000 UTC m=+0.093137980 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:17:17 compute-0 podman[241302]: 2026-01-27 15:17:17.344542621 +0000 UTC m=+0.101137435 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.buildah.version=1.29.0, version=9.4, release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, build-date=2024-09-18T21:23:30)
Jan 27 15:17:19 compute-0 nova_compute[185191]: 2026-01-27 15:17:19.653 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:20 compute-0 nova_compute[185191]: 2026-01-27 15:17:20.399 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:21 compute-0 ovn_controller[97541]: 2026-01-27T15:17:21Z|00044|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Jan 27 15:17:23 compute-0 podman[241343]: 2026-01-27 15:17:23.339815205 +0000 UTC m=+0.094310741 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.947 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.985 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:17:23 compute-0 nova_compute[185191]: 2026-01-27 15:17:23.985 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.134 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.235 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.237 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.298 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.300 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 ovn_controller[97541]: 2026-01-27T15:17:24Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:42:e6 192.168.0.205
Jan 27 15:17:24 compute-0 sshd-session[241380]: Invalid user solv from 2.57.122.238 port 34958
Jan 27 15:17:24 compute-0 ovn_controller[97541]: 2026-01-27T15:17:24Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:42:e6 192.168.0.205
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.420 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.422 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 sshd-session[241380]: Connection closed by invalid user solv 2.57.122.238 port 34958 [preauth]
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.492 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.512 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.600 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.606 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.656 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.669 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.670 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.731 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.733 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.794 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.806 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.866 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.868 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.934 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.937 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:24 compute-0 nova_compute[185191]: 2026-01-27 15:17:24.998 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.000 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.062 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.401 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.435 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.436 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4874MB free_disk=72.38055801391602GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.437 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.437 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.556 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.556 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.556 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.557 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.557 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.659 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.687 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.758 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:17:25 compute-0 nova_compute[185191]: 2026-01-27 15:17:25.759 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:17:26 compute-0 nova_compute[185191]: 2026-01-27 15:17:26.756 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:26 compute-0 nova_compute[185191]: 2026-01-27 15:17:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:26 compute-0 nova_compute[185191]: 2026-01-27 15:17:26.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:17:27 compute-0 nova_compute[185191]: 2026-01-27 15:17:27.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:27 compute-0 nova_compute[185191]: 2026-01-27 15:17:27.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:17:28 compute-0 nova_compute[185191]: 2026-01-27 15:17:28.850 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:17:28 compute-0 nova_compute[185191]: 2026-01-27 15:17:28.851 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:17:28 compute-0 nova_compute[185191]: 2026-01-27 15:17:28.851 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:17:29 compute-0 nova_compute[185191]: 2026-01-27 15:17:29.659 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:29 compute-0 podman[201073]: time="2026-01-27T15:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:17:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:17:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 27 15:17:30 compute-0 nova_compute[185191]: 2026-01-27 15:17:30.404 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.243 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.269 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.270 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.271 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.272 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.273 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:31 compute-0 nova_compute[185191]: 2026-01-27 15:17:31.273 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:17:31 compute-0 openstack_network_exporter[204239]: ERROR   15:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:17:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:17:31 compute-0 openstack_network_exporter[204239]: ERROR   15:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:17:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:17:32 compute-0 podman[241419]: 2026-01-27 15:17:32.344620666 +0000 UTC m=+0.093737166 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 15:17:34 compute-0 nova_compute[185191]: 2026-01-27 15:17:34.662 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:35 compute-0 podman[241443]: 2026-01-27 15:17:35.357144723 +0000 UTC m=+0.091197186 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal)
Jan 27 15:17:35 compute-0 podman[241436]: 2026-01-27 15:17:35.367113776 +0000 UTC m=+0.120145890 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, managed_by=edpm_ansible)
Jan 27 15:17:35 compute-0 nova_compute[185191]: 2026-01-27 15:17:35.407 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:35 compute-0 podman[241437]: 2026-01-27 15:17:35.410306742 +0000 UTC m=+0.154129154 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:17:39 compute-0 nova_compute[185191]: 2026-01-27 15:17:39.664 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:40 compute-0 nova_compute[185191]: 2026-01-27 15:17:40.412 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:44 compute-0 podman[241498]: 2026-01-27 15:17:44.311606072 +0000 UTC m=+0.068622765 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 27 15:17:44 compute-0 nova_compute[185191]: 2026-01-27 15:17:44.667 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:45 compute-0 nova_compute[185191]: 2026-01-27 15:17:45.413 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:48 compute-0 podman[241519]: 2026-01-27 15:17:48.317192725 +0000 UTC m=+0.073562171 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:17:48 compute-0 podman[241518]: 2026-01-27 15:17:48.365476831 +0000 UTC m=+0.120081529 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.expose-services=, config_id=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc.)
Jan 27 15:17:49 compute-0 nova_compute[185191]: 2026-01-27 15:17:49.669 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:50 compute-0 nova_compute[185191]: 2026-01-27 15:17:50.415 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:54 compute-0 podman[241563]: 2026-01-27 15:17:54.321687111 +0000 UTC m=+0.077486889 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:17:54 compute-0 nova_compute[185191]: 2026-01-27 15:17:54.676 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:55 compute-0 nova_compute[185191]: 2026-01-27 15:17:55.419 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:59 compute-0 nova_compute[185191]: 2026-01-27 15:17:59.679 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:17:59 compute-0 podman[201073]: time="2026-01-27T15:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:17:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:17:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 15:18:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:00.229 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:00.229 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:00.230 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:00 compute-0 nova_compute[185191]: 2026-01-27 15:18:00.423 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:01 compute-0 openstack_network_exporter[204239]: ERROR   15:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:18:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:18:01 compute-0 openstack_network_exporter[204239]: ERROR   15:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:18:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:18:03 compute-0 podman[241591]: 2026-01-27 15:18:03.334678707 +0000 UTC m=+0.086780814 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 27 15:18:04 compute-0 nova_compute[185191]: 2026-01-27 15:18:04.682 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:05 compute-0 nova_compute[185191]: 2026-01-27 15:18:05.426 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:06 compute-0 podman[241610]: 2026-01-27 15:18:06.336510835 +0000 UTC m=+0.080278396 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 27 15:18:06 compute-0 podman[241612]: 2026-01-27 15:18:06.374161149 +0000 UTC m=+0.104794229 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 15:18:06 compute-0 podman[241611]: 2026-01-27 15:18:06.379872786 +0000 UTC m=+0.118551437 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, tcib_managed=true)
Jan 27 15:18:09 compute-0 nova_compute[185191]: 2026-01-27 15:18:09.683 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:10 compute-0 nova_compute[185191]: 2026-01-27 15:18:10.428 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:14 compute-0 nova_compute[185191]: 2026-01-27 15:18:14.686 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:14 compute-0 podman[241668]: 2026-01-27 15:18:14.745049881 +0000 UTC m=+0.065334565 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:18:15 compute-0 nova_compute[185191]: 2026-01-27 15:18:15.429 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:19 compute-0 podman[241688]: 2026-01-27 15:18:19.315726012 +0000 UTC m=+0.067271848 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:18:19 compute-0 podman[241687]: 2026-01-27 15:18:19.343160446 +0000 UTC m=+0.100441660 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Jan 27 15:18:19 compute-0 nova_compute[185191]: 2026-01-27 15:18:19.689 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:20 compute-0 nova_compute[185191]: 2026-01-27 15:18:20.432 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:24 compute-0 nova_compute[185191]: 2026-01-27 15:18:24.692 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:25 compute-0 podman[241729]: 2026-01-27 15:18:25.351386595 +0000 UTC m=+0.100489871 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:18:25 compute-0 nova_compute[185191]: 2026-01-27 15:18:25.435 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:25 compute-0 nova_compute[185191]: 2026-01-27 15:18:25.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:25 compute-0 nova_compute[185191]: 2026-01-27 15:18:25.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:25 compute-0 nova_compute[185191]: 2026-01-27 15:18:25.979 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.014 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.015 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.016 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.016 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.115 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.176 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.177 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.234 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.235 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.294 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.295 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.368 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.374 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.435 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.436 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.512 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.514 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.568 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.570 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.628 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.635 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.690 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.692 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.749 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.750 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.810 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.812 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:26 compute-0 nova_compute[185191]: 2026-01-27 15:18:26.874 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.199 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.201 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=72.37947845458984GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.201 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.202 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.576 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.576 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.577 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.577 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.578 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.735 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.776 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.778 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:18:27 compute-0 nova_compute[185191]: 2026-01-27 15:18:27.778 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.743 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.744 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.745 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.746 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.947 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.948 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:18:28 compute-0 nova_compute[185191]: 2026-01-27 15:18:28.948 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:18:29 compute-0 nova_compute[185191]: 2026-01-27 15:18:29.344 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:18:29 compute-0 nova_compute[185191]: 2026-01-27 15:18:29.345 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:18:29 compute-0 nova_compute[185191]: 2026-01-27 15:18:29.346 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:18:29 compute-0 nova_compute[185191]: 2026-01-27 15:18:29.347 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:18:29 compute-0 nova_compute[185191]: 2026-01-27 15:18:29.693 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:29 compute-0 podman[201073]: time="2026-01-27T15:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:18:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:18:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 15:18:30 compute-0 nova_compute[185191]: 2026-01-27 15:18:30.440 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:31 compute-0 openstack_network_exporter[204239]: ERROR   15:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:18:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:18:31 compute-0 openstack_network_exporter[204239]: ERROR   15:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:18:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.408 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.431 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.432 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.433 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.433 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.433 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:18:32 compute-0 nova_compute[185191]: 2026-01-27 15:18:32.438 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:32 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:32.439 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:18:32 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:32.439 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:18:34 compute-0 podman[241788]: 2026-01-27 15:18:34.307000516 +0000 UTC m=+0.066215709 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:18:34 compute-0 nova_compute[185191]: 2026-01-27 15:18:34.695 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:35 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:35.441 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:35 compute-0 nova_compute[185191]: 2026-01-27 15:18:35.444 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:37 compute-0 podman[241808]: 2026-01-27 15:18:37.342528068 +0000 UTC m=+0.095534925 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS)
Jan 27 15:18:37 compute-0 podman[241810]: 2026-01-27 15:18:37.360746858 +0000 UTC m=+0.102115085 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:18:37 compute-0 podman[241809]: 2026-01-27 15:18:37.390734212 +0000 UTC m=+0.140309664 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 15:18:39 compute-0 nova_compute[185191]: 2026-01-27 15:18:39.697 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:39 compute-0 nova_compute[185191]: 2026-01-27 15:18:39.855 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:39 compute-0 nova_compute[185191]: 2026-01-27 15:18:39.856 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:39 compute-0 nova_compute[185191]: 2026-01-27 15:18:39.896 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.055 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.056 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.066 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.067 185195 INFO nova.compute.claims [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.303 185195 DEBUG nova.compute.provider_tree [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.319 185195 DEBUG nova.scheduler.client.report [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.345 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.346 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.400 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.401 185195 DEBUG nova.network.neutron [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.429 185195 INFO nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.447 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.494 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.614 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.623 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.625 185195 INFO nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Creating image(s)
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.626 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.626 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.628 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.647 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.709 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.710 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.711 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.724 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.781 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:40 compute-0 nova_compute[185191]: 2026-01-27 15:18:40.782 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.096 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9,backing_fmt=raw /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk 1073741824" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.098 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.099 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.156 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.157 185195 DEBUG nova.virt.disk.api [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking if we can resize image /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.158 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.216 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.218 185195 DEBUG nova.virt.disk.api [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Cannot resize image /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.218 185195 DEBUG nova.objects.instance [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'migration_context' on Instance uuid d855a654-d263-4516-8382-efa129798a0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.247 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.248 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.249 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.262 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.320 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.321 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.322 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.334 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.390 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.400 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.628 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 1073741824" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.629 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.630 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.723 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.725 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.725 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Ensure instance console log exists: /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.726 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.727 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:41 compute-0 nova_compute[185191]: 2026-01-27 15:18:41.727 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.305 185195 DEBUG nova.network.neutron [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Successfully updated port: 2bcdea5a-f4b9-4e61-9a89-5af70265faba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.353 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.354 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.355 185195 DEBUG nova.network.neutron [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.436 185195 DEBUG nova.compute.manager [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-changed-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.437 185195 DEBUG nova.compute.manager [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Refreshing instance network info cache due to event network-changed-2bcdea5a-f4b9-4e61-9a89-5af70265faba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.437 185195 DEBUG oslo_concurrency.lockutils [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:18:43 compute-0 nova_compute[185191]: 2026-01-27 15:18:43.924 185195 DEBUG nova.network.neutron [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.700 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.945 185195 DEBUG nova.network.neutron [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.974 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.976 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Instance network_info: |[{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.978 185195 DEBUG oslo_concurrency.lockutils [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.980 185195 DEBUG nova.network.neutron [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Refreshing network info cache for port 2bcdea5a-f4b9-4e61-9a89-5af70265faba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.987 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Start _get_guest_xml network_info=[{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:18:44 compute-0 nova_compute[185191]: 2026-01-27 15:18:44.997 185195 WARNING nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.012 185195 DEBUG nova.virt.libvirt.host [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.013 185195 DEBUG nova.virt.libvirt.host [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.020 185195 DEBUG nova.virt.libvirt.host [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.022 185195 DEBUG nova.virt.libvirt.host [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.023 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.024 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:08:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='26a24ace-a5af-47b3-9314-7d2b9e74c6b8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:08:48Z,direct_url=<?>,disk_format='qcow2',id=2b336e4b-c98e-4b97-9f8f-b3290e6b6caf,min_disk=0,min_ram=0,name='cirros',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:08:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.026 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.027 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.028 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.029 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.030 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.032 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.033 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.034 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.035 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.036 185195 DEBUG nova.virt.hardware [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.044 185195 DEBUG nova.virt.libvirt.vif [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',id=4,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-tvamciqz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:18:40Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDM5OTU2Njg3MTMxMTcxMDc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 15:18:45 compute-0 nova_compute[185191]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDM5OTU2Njg3MTMxMTcxMDc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=d855a654-d263-4516-8382-efa129798a0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.045 185195 DEBUG nova.network.os_vif_util [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.046 185195 DEBUG nova.network.os_vif_util [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.048 185195 DEBUG nova.objects.instance [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'pci_devices' on Instance uuid d855a654-d263-4516-8382-efa129798a0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.067 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <uuid>d855a654-d263-4516-8382-efa129798a0d</uuid>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <name>instance-00000004</name>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <memory>524288</memory>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:name>vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj</nova:name>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:18:44</nova:creationTime>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:flavor name="m1.small">
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:memory>512</nova:memory>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:user uuid="24260fb24da44b10b598f9c822c026b8">admin</nova:user>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:project uuid="dd88ca4062da4fb9bedb3a0002a43c12">admin</nova:project>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         <nova:port uuid="2bcdea5a-f4b9-4e61-9a89-5af70265faba">
Jan 27 15:18:45 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="192.168.0.20" ipVersion="4"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <system>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="serial">d855a654-d263-4516-8382-efa129798a0d</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="uuid">d855a654-d263-4516-8382-efa129798a0d</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </system>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <os>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </os>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <features>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </features>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <target dev="vdb" bus="virtio"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.config"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:36:7d:58"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <target dev="tap2bcdea5a-f4"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/console.log" append="off"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <video>
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </video>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:18:45 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:18:45 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:18:45 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:18:45 compute-0 nova_compute[185191]: </domain>
Jan 27 15:18:45 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.079 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Preparing to wait for external event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.079 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.080 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.080 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.081 185195 DEBUG nova.virt.libvirt.vif [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',id=4,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-tvamciqz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:18:40Z,user_data='Content-Type: multipart/mixed; boundary="===============4399566871311710741=="
MIME-Version: 1.0

--===============4399566871311710741==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============4399566871311710741==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============4399566871311710741==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============4399566871311710741==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============4399566871311710741==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============4399566871311710741==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============4399566871311710741==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============4399566871311710741==--
',user_id='24260fb24da44b10b598f9c822c026b8',uuid=d855a654-d263-4516-8382-efa129798a0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.081 185195 DEBUG nova.network.os_vif_util [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.081 185195 DEBUG nova.network.os_vif_util [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.082 185195 DEBUG os_vif [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.084 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.085 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.085 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.089 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.090 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bcdea5a-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.091 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bcdea5a-f4, col_values=(('external_ids', {'iface-id': '2bcdea5a-f4b9-4e61-9a89-5af70265faba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:7d:58', 'vm-uuid': 'd855a654-d263-4516-8382-efa129798a0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.093 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.095 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:18:45 compute-0 NetworkManager[56090]: <info>  [1769527125.0959] manager: (tap2bcdea5a-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.104 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.106 185195 INFO os_vif [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4')
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.169 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.170 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.170 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.171 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No VIF found with MAC fa:16:3e:36:7d:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.172 185195 INFO nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Using config drive
Jan 27 15:18:45 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:18:45.044 185195 DEBUG nova.virt.libvirt.vif [None req-55050d7f-5f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:18:45 compute-0 podman[241903]: 2026-01-27 15:18:45.329277433 +0000 UTC m=+0.080923584 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.451 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.608 185195 INFO nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Creating config drive at /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.config
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.616 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i48024r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.747 185195 DEBUG oslo_concurrency.processutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9i48024r" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:18:45 compute-0 kernel: tap2bcdea5a-f4: entered promiscuous mode
Jan 27 15:18:45 compute-0 NetworkManager[56090]: <info>  [1769527125.8492] manager: (tap2bcdea5a-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.848 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.857 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 ovn_controller[97541]: 2026-01-27T15:18:45Z|00045|binding|INFO|Claiming lport 2bcdea5a-f4b9-4e61-9a89-5af70265faba for this chassis.
Jan 27 15:18:45 compute-0 ovn_controller[97541]: 2026-01-27T15:18:45Z|00046|binding|INFO|2bcdea5a-f4b9-4e61-9a89-5af70265faba: Claiming fa:16:3e:36:7d:58 192.168.0.20
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.866 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:7d:58 192.168.0.20'], port_security=['fa:16:3e:36:7d:58 192.168.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-v6zwyh72ilg3-valx525hdwsf-port-mbhcac6i36zf', 'neutron:cidrs': '192.168.0.20/24', 'neutron:device_id': 'd855a654-d263-4516-8382-efa129798a0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-v6zwyh72ilg3-valx525hdwsf-port-mbhcac6i36zf', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '2', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=2bcdea5a-f4b9-4e61-9a89-5af70265faba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.868 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 2bcdea5a-f4b9-4e61-9a89-5af70265faba in datapath d7e37fe5-6354-4f61-95d0-78632be96811 bound to our chassis
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.869 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:18:45 compute-0 ovn_controller[97541]: 2026-01-27T15:18:45Z|00047|binding|INFO|Setting lport 2bcdea5a-f4b9-4e61-9a89-5af70265faba ovn-installed in OVS
Jan 27 15:18:45 compute-0 ovn_controller[97541]: 2026-01-27T15:18:45Z|00048|binding|INFO|Setting lport 2bcdea5a-f4b9-4e61-9a89-5af70265faba up in Southbound
Jan 27 15:18:45 compute-0 nova_compute[185191]: 2026-01-27 15:18:45.882 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.895 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea0962b-5bc1-46da-b2ce-1595c77e0e6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:45 compute-0 systemd-machined[156506]: New machine qemu-4-instance-00000004.
Jan 27 15:18:45 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.933 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[eba1cee6-b3ad-4d08-8231-349a0fde6b66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.937 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[000bdc62-f33a-4b0b-8d7e-6b1ea0fc9413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:45 compute-0 systemd-udevd[241945]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:18:45 compute-0 NetworkManager[56090]: <info>  [1769527125.9603] device (tap2bcdea5a-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:18:45 compute-0 NetworkManager[56090]: <info>  [1769527125.9659] device (tap2bcdea5a-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.971 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[e689277b-ae60-4d9c-b7f4-1092d1939ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:45.994 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[34667531-f8a4-462f-b3f5-1866bc401183]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 36147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241950, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.011 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5f27f5-d313-4dff-a54b-4792c8a59b1a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241955, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241955, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.014 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.016 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.017 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.017 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.017 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.018 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:18:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:18:46.018 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.403 185195 DEBUG nova.compute.manager [req-316c38b7-000e-482a-8e11-5cb1ba7a4a75 req-eccb2bd2-aa46-4aea-8a8a-10c03bd9bb4d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.404 185195 DEBUG oslo_concurrency.lockutils [req-316c38b7-000e-482a-8e11-5cb1ba7a4a75 req-eccb2bd2-aa46-4aea-8a8a-10c03bd9bb4d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.404 185195 DEBUG oslo_concurrency.lockutils [req-316c38b7-000e-482a-8e11-5cb1ba7a4a75 req-eccb2bd2-aa46-4aea-8a8a-10c03bd9bb4d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.405 185195 DEBUG oslo_concurrency.lockutils [req-316c38b7-000e-482a-8e11-5cb1ba7a4a75 req-eccb2bd2-aa46-4aea-8a8a-10c03bd9bb4d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.405 185195 DEBUG nova.compute.manager [req-316c38b7-000e-482a-8e11-5cb1ba7a4a75 req-eccb2bd2-aa46-4aea-8a8a-10c03bd9bb4d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Processing event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.698 185195 DEBUG nova.network.neutron [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Updated VIF entry in instance network info cache for port 2bcdea5a-f4b9-4e61-9a89-5af70265faba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.699 185195 DEBUG nova.network.neutron [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.721 185195 DEBUG oslo_concurrency.lockutils [req-8a28310b-5107-4575-8cfc-8b8f8ee07715 req-10c3da3a-5868-47c5-aeef-499fd77b47a3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.770 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527126.7690673, d855a654-d263-4516-8382-efa129798a0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.771 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] VM Started (Lifecycle Event)
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.773 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.778 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.784 185195 INFO nova.virt.libvirt.driver [-] [instance: d855a654-d263-4516-8382-efa129798a0d] Instance spawned successfully.
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.784 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.791 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.796 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.815 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.815 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.816 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.817 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.818 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.818 185195 DEBUG nova.virt.libvirt.driver [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.823 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.823 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527126.769185, d855a654-d263-4516-8382-efa129798a0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.824 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] VM Paused (Lifecycle Event)
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.871 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.878 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527126.7777917, d855a654-d263-4516-8382-efa129798a0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.879 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] VM Resumed (Lifecycle Event)
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.912 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.919 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.931 185195 INFO nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Took 6.31 seconds to spawn the instance on the hypervisor.
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.931 185195 DEBUG nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:18:46 compute-0 nova_compute[185191]: 2026-01-27 15:18:46.945 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:18:47 compute-0 nova_compute[185191]: 2026-01-27 15:18:47.013 185195 INFO nova.compute.manager [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Took 7.00 seconds to build instance.
Jan 27 15:18:47 compute-0 nova_compute[185191]: 2026-01-27 15:18:47.031 185195 DEBUG oslo_concurrency.lockutils [None req-55050d7f-5f6b-42ec-884f-7f68ef0fee34 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.497 185195 DEBUG nova.compute.manager [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.498 185195 DEBUG oslo_concurrency.lockutils [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.499 185195 DEBUG oslo_concurrency.lockutils [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.500 185195 DEBUG oslo_concurrency.lockutils [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.501 185195 DEBUG nova.compute.manager [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] No waiting events found dispatching network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:18:48 compute-0 nova_compute[185191]: 2026-01-27 15:18:48.502 185195 WARNING nova.compute.manager [req-be534c28-da14-4c38-a35a-4c4430b80051 req-20fd356d-2411-41eb-bece-7b4984c0305e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received unexpected event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba for instance with vm_state active and task_state None.
Jan 27 15:18:50 compute-0 nova_compute[185191]: 2026-01-27 15:18:50.094 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:50 compute-0 podman[241966]: 2026-01-27 15:18:50.343307738 +0000 UTC m=+0.084507331 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:18:50 compute-0 podman[241965]: 2026-01-27 15:18:50.360797719 +0000 UTC m=+0.102984610 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, config_id=kepler, container_name=kepler, vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, io.buildah.version=1.29.0)
Jan 27 15:18:50 compute-0 nova_compute[185191]: 2026-01-27 15:18:50.452 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:55 compute-0 nova_compute[185191]: 2026-01-27 15:18:55.097 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:55 compute-0 nova_compute[185191]: 2026-01-27 15:18:55.456 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:18:56 compute-0 podman[242005]: 2026-01-27 15:18:56.323283002 +0000 UTC m=+0.072924493 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:18:59 compute-0 podman[201073]: time="2026-01-27T15:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:18:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:18:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 15:19:00 compute-0 nova_compute[185191]: 2026-01-27 15:19:00.100 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:19:00.230 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:19:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:19:00.231 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:19:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:19:00.231 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:19:00 compute-0 nova_compute[185191]: 2026-01-27 15:19:00.457 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:01 compute-0 openstack_network_exporter[204239]: ERROR   15:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:19:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:19:01 compute-0 openstack_network_exporter[204239]: ERROR   15:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:19:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:19:05 compute-0 nova_compute[185191]: 2026-01-27 15:19:05.102 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:05 compute-0 podman[242027]: 2026-01-27 15:19:05.34180755 +0000 UTC m=+0.088906393 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 15:19:05 compute-0 nova_compute[185191]: 2026-01-27 15:19:05.460 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:08 compute-0 podman[242047]: 2026-01-27 15:19:08.32626929 +0000 UTC m=+0.081154710 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Jan 27 15:19:08 compute-0 podman[242049]: 2026-01-27 15:19:08.34009461 +0000 UTC m=+0.082640251 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 15:19:08 compute-0 podman[242048]: 2026-01-27 15:19:08.379140072 +0000 UTC m=+0.121660332 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:19:10 compute-0 nova_compute[185191]: 2026-01-27 15:19:10.105 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:10 compute-0 nova_compute[185191]: 2026-01-27 15:19:10.461 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.986 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.986 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.991 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d855a654-d263-4516-8382-efa129798a0d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:19:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:10.994 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d855a654-d263-4516-8382-efa129798a0d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.012 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 27 Jan 2026 15:19:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a61a7850-bfa5-4286-8da3-753c1e03cd48 x-openstack-request-id: req-a61a7850-bfa5-4286-8da3-753c1e03cd48 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.012 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d855a654-d263-4516-8382-efa129798a0d", "name": "vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj", "status": "ACTIVE", "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "user_id": "24260fb24da44b10b598f9c822c026b8", "metadata": {"metering.server_group": "92e45285-9077-420c-bb23-df5c16dca6b3"}, "hostId": "3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb", "image": {"id": "2b336e4b-c98e-4b97-9f8f-b3290e6b6caf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2b336e4b-c98e-4b97-9f8f-b3290e6b6caf"}]}, "flavor": {"id": "26a24ace-a5af-47b3-9314-7d2b9e74c6b8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/26a24ace-a5af-47b3-9314-7d2b9e74c6b8"}]}, "created": "2026-01-27T15:18:38Z", "updated": "2026-01-27T15:18:46Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.20", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:36:7d:58"}, {"version": 4, "addr": "192.168.122.247", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:36:7d:58"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d855a654-d263-4516-8382-efa129798a0d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d855a654-d263-4516-8382-efa129798a0d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:18:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.013 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d855a654-d263-4516-8382-efa129798a0d used request id req-a61a7850-bfa5-4286-8da3-753c1e03cd48 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.014 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.017 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'name': 'vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.021 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.025 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'name': 'vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:19:13.026627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.109 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.111 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.112 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.174 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 2049506219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.174 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 12506227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.175 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.235 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.236 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.236 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.304 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 1732074524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.305 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 11551078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.305 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.368 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:19:13.367330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.368 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.369 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.369 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.369 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.370 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.370 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.371 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.371 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.371 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.372 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.372 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:19:13.374583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.400 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.401 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.402 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.423 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.424 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.424 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.447 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.447 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.448 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.480 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.481 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.481 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.482 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.483 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:19:13.483252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.487 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for d855a654-d263-4516-8382-efa129798a0d / tap2bcdea5a-f4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.487 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.490 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.493 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.496 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:19:13.498740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:19:13.501026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.503 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:19:13.503204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.503 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.504 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.504 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.505 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:19:13.506296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.539 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 26040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.561 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/cpu volume: 33210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.611 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 40250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.632 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/cpu volume: 347040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.634 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.635 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.635 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:19:13.634929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.635 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.636 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.637 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.638 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:19:13.638049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.638 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.639 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.639 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.640 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.641 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:19:13.641519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.642 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance d855a654-d263-4516-8382-efa129798a0d: ceilometer.compute.pollsters.NoVolumeException
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.642 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.642 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.643 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.645 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.645 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:19:13.644882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.645 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.646 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.648 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.648 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj>]
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:19:13.648023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.650 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:19:13.650067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.650 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.651 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.651 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.653 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.653 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.654 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes.delta volume: 1480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:19:13.653680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.654 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.655 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.656 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.657 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:19:13.656618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.657 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.658 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes volume: 7634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.659 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.659 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.659 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.660 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:19:13.659641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.660 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.660 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.661 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.661 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.661 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.662 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.662 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.662 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.663 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.663 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.664 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.665 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.665 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.665 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:19:13.665117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.666 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.666 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.667 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.668 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.668 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.668 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:19:13.668133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.669 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.669 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.669 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.670 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.670 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.671 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.671 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.671 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.672 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.672 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.674 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.674 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.674 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:19:13.674158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.675 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.675 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.676 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.677 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.677 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 1868128561 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.677 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:19:13.677319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.678 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 57175340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.678 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 838942513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.679 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 127847454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.679 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 233079678 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.679 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.680 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.680 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.680 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 697193681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.681 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 100159582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.681 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 227319301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.683 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:19:13.683352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.683 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj>]
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.685 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:19:13.685268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.685 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.686 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.686 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.687 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.687 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.687 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.687 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.688 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.688 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.688 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.689 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.691 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.691 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:19:13.691256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.692 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.692 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.692 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.693 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.693 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.693 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.694 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.694 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.694 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.695 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.696 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.697 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.697 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:19:13.696803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.698 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.699 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.700 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:19:13.701571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.702 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.703 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.703 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.704 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.704 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.705 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.705 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.706 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.706 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.707 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.708 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.708 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:19:13.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:19:15 compute-0 nova_compute[185191]: 2026-01-27 15:19:15.107 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:15 compute-0 nova_compute[185191]: 2026-01-27 15:19:15.463 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:15 compute-0 ovn_controller[97541]: 2026-01-27T15:19:15Z|00049|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 27 15:19:16 compute-0 podman[242108]: 2026-01-27 15:19:16.372512578 +0000 UTC m=+0.132800748 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 15:19:19 compute-0 ovn_controller[97541]: 2026-01-27T15:19:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:7d:58 192.168.0.20
Jan 27 15:19:19 compute-0 ovn_controller[97541]: 2026-01-27T15:19:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:7d:58 192.168.0.20
Jan 27 15:19:20 compute-0 nova_compute[185191]: 2026-01-27 15:19:20.113 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:20 compute-0 nova_compute[185191]: 2026-01-27 15:19:20.464 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:21 compute-0 podman[242144]: 2026-01-27 15:19:21.328166376 +0000 UTC m=+0.077328894 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:19:21 compute-0 podman[242143]: 2026-01-27 15:19:21.340994709 +0000 UTC m=+0.092382028 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=kepler)
Jan 27 15:19:25 compute-0 nova_compute[185191]: 2026-01-27 15:19:25.119 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:25 compute-0 nova_compute[185191]: 2026-01-27 15:19:25.467 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.980 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.981 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.981 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:19:26 compute-0 nova_compute[185191]: 2026-01-27 15:19:26.982 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.099 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 podman[242187]: 2026-01-27 15:19:27.154781245 +0000 UTC m=+0.101657463 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.180 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.181 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.256 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.257 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.320 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.321 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.387 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.395 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.460 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.462 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.527 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.529 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.591 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.592 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.649 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.662 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.742 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.743 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.822 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.823 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.888 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.889 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.948 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:27 compute-0 nova_compute[185191]: 2026-01-27 15:19:27.957 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.022 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.023 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.097 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.099 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.164 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.166 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.231 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.611 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.612 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4670MB free_disk=72.35689926147461GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.613 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.613 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.798 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.798 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.798 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.799 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.799 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.799 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.923 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.946 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.985 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:19:28 compute-0 nova_compute[185191]: 2026-01-27 15:19:28.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:19:29 compute-0 podman[201073]: time="2026-01-27T15:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:19:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:19:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 27 15:19:29 compute-0 nova_compute[185191]: 2026-01-27 15:19:29.985 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:29 compute-0 nova_compute[185191]: 2026-01-27 15:19:29.986 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:29 compute-0 nova_compute[185191]: 2026-01-27 15:19:29.986 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:19:30 compute-0 nova_compute[185191]: 2026-01-27 15:19:30.122 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:30 compute-0 nova_compute[185191]: 2026-01-27 15:19:30.470 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:30 compute-0 nova_compute[185191]: 2026-01-27 15:19:30.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:30 compute-0 nova_compute[185191]: 2026-01-27 15:19:30.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:19:31 compute-0 openstack_network_exporter[204239]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:19:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:19:31 compute-0 openstack_network_exporter[204239]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:19:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:19:32 compute-0 nova_compute[185191]: 2026-01-27 15:19:32.004 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:19:32 compute-0 nova_compute[185191]: 2026-01-27 15:19:32.005 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:19:32 compute-0 nova_compute[185191]: 2026-01-27 15:19:32.005 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:19:35 compute-0 nova_compute[185191]: 2026-01-27 15:19:35.126 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:35 compute-0 sshd-session[242261]: Invalid user solv from 2.57.122.238 port 40554
Jan 27 15:19:35 compute-0 sshd-session[242261]: Connection closed by invalid user solv 2.57.122.238 port 40554 [preauth]
Jan 27 15:19:35 compute-0 nova_compute[185191]: 2026-01-27 15:19:35.472 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:36 compute-0 podman[242263]: 2026-01-27 15:19:36.382882301 +0000 UTC m=+0.111863743 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.247 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [{"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.278 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.278 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.279 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.279 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:37 compute-0 nova_compute[185191]: 2026-01-27 15:19:37.280 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:19:39 compute-0 podman[242282]: 2026-01-27 15:19:39.354138649 +0000 UTC m=+0.110348982 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:19:39 compute-0 podman[242284]: 2026-01-27 15:19:39.391434513 +0000 UTC m=+0.129500498 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7)
Jan 27 15:19:39 compute-0 podman[242283]: 2026-01-27 15:19:39.406518887 +0000 UTC m=+0.145767834 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:19:40 compute-0 nova_compute[185191]: 2026-01-27 15:19:40.131 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:40 compute-0 nova_compute[185191]: 2026-01-27 15:19:40.473 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:45 compute-0 nova_compute[185191]: 2026-01-27 15:19:45.136 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:45 compute-0 nova_compute[185191]: 2026-01-27 15:19:45.475 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:47 compute-0 podman[242346]: 2026-01-27 15:19:47.332403007 +0000 UTC m=+0.090235379 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 15:19:50 compute-0 nova_compute[185191]: 2026-01-27 15:19:50.140 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:50 compute-0 nova_compute[185191]: 2026-01-27 15:19:50.479 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:52 compute-0 podman[242368]: 2026-01-27 15:19:52.343121155 +0000 UTC m=+0.087763452 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., 
io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.4)
Jan 27 15:19:52 compute-0 podman[242369]: 2026-01-27 15:19:52.361080558 +0000 UTC m=+0.096688182 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:19:55 compute-0 nova_compute[185191]: 2026-01-27 15:19:55.144 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:55 compute-0 nova_compute[185191]: 2026-01-27 15:19:55.481 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:19:57 compute-0 podman[242411]: 2026-01-27 15:19:57.347246865 +0000 UTC m=+0.106654410 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:19:59 compute-0 podman[201073]: time="2026-01-27T15:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:19:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:19:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 15:20:00 compute-0 nova_compute[185191]: 2026-01-27 15:20:00.148 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:20:00.232 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:20:00.234 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:20:00.235 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:00 compute-0 nova_compute[185191]: 2026-01-27 15:20:00.483 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:01 compute-0 openstack_network_exporter[204239]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:20:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:20:01 compute-0 openstack_network_exporter[204239]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:20:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:20:05 compute-0 nova_compute[185191]: 2026-01-27 15:20:05.152 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:05 compute-0 nova_compute[185191]: 2026-01-27 15:20:05.485 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:07 compute-0 podman[242434]: 2026-01-27 15:20:07.331055811 +0000 UTC m=+0.076683094 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:20:10 compute-0 nova_compute[185191]: 2026-01-27 15:20:10.155 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:10 compute-0 podman[242454]: 2026-01-27 15:20:10.340691902 +0000 UTC m=+0.077303281 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, distribution-scope=public, managed_by=edpm_ansible)
Jan 27 15:20:10 compute-0 podman[242452]: 2026-01-27 15:20:10.350850385 +0000 UTC m=+0.099309533 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:20:10 compute-0 podman[242453]: 2026-01-27 15:20:10.369835626 +0000 UTC m=+0.114158483 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 15:20:10 compute-0 nova_compute[185191]: 2026-01-27 15:20:10.487 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:15 compute-0 nova_compute[185191]: 2026-01-27 15:20:15.158 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:15 compute-0 nova_compute[185191]: 2026-01-27 15:20:15.489 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:16 compute-0 nova_compute[185191]: 2026-01-27 15:20:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:18 compute-0 podman[242518]: 2026-01-27 15:20:18.322432331 +0000 UTC m=+0.071835484 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 27 15:20:20 compute-0 nova_compute[185191]: 2026-01-27 15:20:20.161 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:20 compute-0 nova_compute[185191]: 2026-01-27 15:20:20.491 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:22 compute-0 nova_compute[185191]: 2026-01-27 15:20:22.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:22 compute-0 nova_compute[185191]: 2026-01-27 15:20:22.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:20:23 compute-0 podman[242540]: 2026-01-27 15:20:23.311023573 +0000 UTC m=+0.060634913 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:20:23 compute-0 podman[242539]: 2026-01-27 15:20:23.320834057 +0000 UTC m=+0.073447877 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, vendor=Red Hat, Inc., version=9.4)
Jan 27 15:20:24 compute-0 nova_compute[185191]: 2026-01-27 15:20:24.952 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:25 compute-0 nova_compute[185191]: 2026-01-27 15:20:25.164 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:25 compute-0 nova_compute[185191]: 2026-01-27 15:20:25.493 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:26 compute-0 nova_compute[185191]: 2026-01-27 15:20:26.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:26 compute-0 nova_compute[185191]: 2026-01-27 15:20:26.989 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:26 compute-0 nova_compute[185191]: 2026-01-27 15:20:26.989 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:26 compute-0 nova_compute[185191]: 2026-01-27 15:20:26.990 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:26 compute-0 nova_compute[185191]: 2026-01-27 15:20:26.990 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.087 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.170 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.171 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.230 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.231 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.294 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.296 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.357 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.366 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.426 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.426 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.506 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.507 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.572 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.573 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.637 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.644 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.703 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.704 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.769 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.769 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.827 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.828 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.887 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.894 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.956 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:27 compute-0 nova_compute[185191]: 2026-01-27 15:20:27.957 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.026 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.027 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.088 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.089 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.148 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:20:28 compute-0 podman[242628]: 2026-01-27 15:20:28.314767053 +0000 UTC m=+0.075621955 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.514 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.515 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4642MB free_disk=72.35688018798828GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.515 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.516 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.877 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.877 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.877 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.877 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.877 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:20:28 compute-0 nova_compute[185191]: 2026-01-27 15:20:28.878 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.019 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.097 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.098 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.117 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.141 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.231 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.292 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.294 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:20:29 compute-0 nova_compute[185191]: 2026-01-27 15:20:29.294 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:29 compute-0 podman[201073]: time="2026-01-27T15:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:20:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:20:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.167 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.295 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.326 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.327 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.327 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.327 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.495 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:20:30 compute-0 nova_compute[185191]: 2026-01-27 15:20:30.960 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:20:31 compute-0 openstack_network_exporter[204239]: ERROR   15:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:20:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:20:31 compute-0 openstack_network_exporter[204239]: ERROR   15:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:20:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:20:31 compute-0 nova_compute[185191]: 2026-01-27 15:20:31.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:31 compute-0 nova_compute[185191]: 2026-01-27 15:20:31.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:20:33 compute-0 nova_compute[185191]: 2026-01-27 15:20:33.139 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:20:33 compute-0 nova_compute[185191]: 2026-01-27 15:20:33.140 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:20:33 compute-0 nova_compute[185191]: 2026-01-27 15:20:33.140 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:20:35 compute-0 nova_compute[185191]: 2026-01-27 15:20:35.170 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:35 compute-0 nova_compute[185191]: 2026-01-27 15:20:35.497 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.405 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.462 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.464 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.465 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.465 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:37 compute-0 nova_compute[185191]: 2026-01-27 15:20:37.466 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:38 compute-0 podman[242653]: 2026-01-27 15:20:38.337213584 +0000 UTC m=+0.087621079 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:20:40 compute-0 nova_compute[185191]: 2026-01-27 15:20:40.173 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:40 compute-0 nova_compute[185191]: 2026-01-27 15:20:40.499 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:41 compute-0 podman[242674]: 2026-01-27 15:20:41.349177744 +0000 UTC m=+0.096314502 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, config_id=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:20:41 compute-0 podman[242672]: 2026-01-27 15:20:41.354013764 +0000 UTC m=+0.103102805 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 27 15:20:41 compute-0 podman[242673]: 2026-01-27 15:20:41.377446795 +0000 UTC m=+0.133735329 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:20:45 compute-0 nova_compute[185191]: 2026-01-27 15:20:45.177 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:45 compute-0 nova_compute[185191]: 2026-01-27 15:20:45.501 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:49 compute-0 podman[242736]: 2026-01-27 15:20:49.362632556 +0000 UTC m=+0.118462678 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:20:49 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.183 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.503 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.741 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.790 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.795 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid b98b01bd-8dfe-4188-be2f-ebffe0bd1717 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.800 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.801 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid d855a654-d263-4516-8382-efa129798a0d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.801 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.802 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.803 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.804 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.804 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.805 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.805 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.806 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "d855a654-d263-4516-8382-efa129798a0d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.867 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.868 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:50 compute-0 nova_compute[185191]: 2026-01-27 15:20:50.899 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "d855a654-d263-4516-8382-efa129798a0d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:51 compute-0 nova_compute[185191]: 2026-01-27 15:20:51.215 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:20:54 compute-0 podman[242758]: 2026-01-27 15:20:54.371239849 +0000 UTC m=+0.114053250 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=)
Jan 27 15:20:54 compute-0 podman[242759]: 2026-01-27 15:20:54.373340635 +0000 UTC m=+0.115260732 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:20:55 compute-0 nova_compute[185191]: 2026-01-27 15:20:55.187 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:55 compute-0 nova_compute[185191]: 2026-01-27 15:20:55.513 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:20:59 compute-0 podman[242798]: 2026-01-27 15:20:59.312323103 +0000 UTC m=+0.070371295 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:20:59 compute-0 podman[201073]: time="2026-01-27T15:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:20:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:20:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:21:00 compute-0 nova_compute[185191]: 2026-01-27 15:21:00.193 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:21:00.234 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:21:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:21:00.235 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:21:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:21:00.236 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:21:00 compute-0 nova_compute[185191]: 2026-01-27 15:21:00.510 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:01 compute-0 openstack_network_exporter[204239]: ERROR   15:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:21:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:21:01 compute-0 openstack_network_exporter[204239]: ERROR   15:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:21:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:21:05 compute-0 nova_compute[185191]: 2026-01-27 15:21:05.197 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:05 compute-0 nova_compute[185191]: 2026-01-27 15:21:05.514 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:09 compute-0 podman[242823]: 2026-01-27 15:21:09.326248318 +0000 UTC m=+0.078740470 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 15:21:10 compute-0 nova_compute[185191]: 2026-01-27 15:21:10.200 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:10 compute-0 nova_compute[185191]: 2026-01-27 15:21:10.516 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.986 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.987 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8f4ac60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.994 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:21:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:10.997 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'name': 'vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'name': 'vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:21:11.003358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.069 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 4376430048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.070 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 12457092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.070 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.138 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 2049506219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.138 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 12506227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.139 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.215 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.216 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.216 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.311 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 1734519481 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.311 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 11551078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.312 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.314 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.315 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:21:11.313234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.315 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.315 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.316 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.316 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.316 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.316 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.317 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:21:11.318448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.346 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.347 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.348 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.373 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.374 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.374 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.407 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.407 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.408 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.436 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.436 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.437 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:21:11.439145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.444 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.447 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.450 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.454 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.455 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:21:11.455970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:21:11.458844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.461 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:21:11.461182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.462 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.462 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.462 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.463 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:21:11.464429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.485 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 33480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.503 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/cpu volume: 34560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.523 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 41570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.546 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/cpu volume: 348370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.547 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.548 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.549 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:21:11.548275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.549 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.550 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.551 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.552 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:21:11.551822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.552 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.553 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.553 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets volume: 68 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.554 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.554 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.555 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:21:11.555280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.555 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.556 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.556 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.557 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.559 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:21:11.558791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.559 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.560 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.560 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.562 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:21:11.563212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.563 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.564 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.564 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.565 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:21:11.567346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.568 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 1396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.568 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.568 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.569 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:21:11.571397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.572 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.572 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.573 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.573 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes volume: 7704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:21:11.575490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.576 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.576 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.577 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.577 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.577 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.578 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.578 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.579 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.579 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.580 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.580 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.580 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:21:11.583160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.583 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.584 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.584 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.585 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.586 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:21:11.587292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.588 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.588 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.589 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.589 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.590 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.590 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.590 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.591 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.591 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.592 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.592 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.592 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.594 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.595 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:21:11.595795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.596 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.597 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.597 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.597 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.599 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.600 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:21:11.599895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.600 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 2012849032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.601 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 99931447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.601 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 145016237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.602 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 838942513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.602 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 127847454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.602 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 233079678 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.603 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.603 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.604 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.604 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 697193681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.605 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 100159582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.605 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.latency volume: 227319301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.607 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:21:11.608459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.609 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.609 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.610 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.610 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.610 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.611 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.611 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.612 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.612 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.613 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.613 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.613 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.615 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:21:11.616133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.616 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.617 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.617 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.618 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.618 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.619 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.619 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.619 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.620 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.620 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.621 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.621 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:21:11.623715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.624 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.624 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.625 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.625 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:21:11.627760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.628 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.628 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.629 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.629 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.630 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.630 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.630 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.631 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.631 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.632 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.632 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.633 14 DEBUG ceilometer.compute.pollsters [-] b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.634 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:21:11.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:21:12 compute-0 podman[242844]: 2026-01-27 15:21:12.337401959 +0000 UTC m=+0.086939790 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:21:12 compute-0 podman[242846]: 2026-01-27 15:21:12.342729232 +0000 UTC m=+0.084353800 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Jan 27 15:21:12 compute-0 podman[242845]: 2026-01-27 15:21:12.425234612 +0000 UTC m=+0.169158892 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:21:15 compute-0 nova_compute[185191]: 2026-01-27 15:21:15.202 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:15 compute-0 nova_compute[185191]: 2026-01-27 15:21:15.518 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:20 compute-0 nova_compute[185191]: 2026-01-27 15:21:20.205 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:20 compute-0 podman[242906]: 2026-01-27 15:21:20.328273304 +0000 UTC m=+0.082222303 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:21:20 compute-0 nova_compute[185191]: 2026-01-27 15:21:20.520 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:25 compute-0 nova_compute[185191]: 2026-01-27 15:21:25.207 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:25 compute-0 podman[242926]: 2026-01-27 15:21:25.333240117 +0000 UTC m=+0.078928014 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, 
com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=)
Jan 27 15:21:25 compute-0 podman[242927]: 2026-01-27 15:21:25.353064781 +0000 UTC m=+0.094889244 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:21:25 compute-0 nova_compute[185191]: 2026-01-27 15:21:25.522 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:21:26 compute-0 nova_compute[185191]: 2026-01-27 15:21:26.976 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.079 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.142 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.143 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.205 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.206 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.271 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.272 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.334 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.341 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.412 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.413 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.477 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.478 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.541 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.542 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.605 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.617 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.684 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.685 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.750 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.752 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.812 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.813 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.878 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.885 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.942 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:27 compute-0 nova_compute[185191]: 2026-01-27 15:21:27.943 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.005 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.007 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.081 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.082 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.165 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.570 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.571 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4599MB free_disk=72.35492706298828GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.572 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.572 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.767 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.768 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.768 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.768 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.769 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.769 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.859 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.903 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.905 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:21:28 compute-0 nova_compute[185191]: 2026-01-27 15:21:28.906 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:21:29 compute-0 podman[201073]: time="2026-01-27T15:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:21:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:21:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:21:29 compute-0 nova_compute[185191]: 2026-01-27 15:21:29.906 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:29 compute-0 nova_compute[185191]: 2026-01-27 15:21:29.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:29 compute-0 nova_compute[185191]: 2026-01-27 15:21:29.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:21:30 compute-0 nova_compute[185191]: 2026-01-27 15:21:30.210 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:30 compute-0 podman[243018]: 2026-01-27 15:21:30.342786914 +0000 UTC m=+0.097327419 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:21:30 compute-0 nova_compute[185191]: 2026-01-27 15:21:30.524 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:30 compute-0 nova_compute[185191]: 2026-01-27 15:21:30.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:31 compute-0 openstack_network_exporter[204239]: ERROR   15:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:21:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:21:31 compute-0 openstack_network_exporter[204239]: ERROR   15:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:21:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:21:32 compute-0 nova_compute[185191]: 2026-01-27 15:21:32.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:32 compute-0 nova_compute[185191]: 2026-01-27 15:21:32.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:21:32 compute-0 nova_compute[185191]: 2026-01-27 15:21:32.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:21:34 compute-0 nova_compute[185191]: 2026-01-27 15:21:34.160 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:21:34 compute-0 nova_compute[185191]: 2026-01-27 15:21:34.161 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:21:34 compute-0 nova_compute[185191]: 2026-01-27 15:21:34.161 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:21:34 compute-0 nova_compute[185191]: 2026-01-27 15:21:34.162 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:21:35 compute-0 nova_compute[185191]: 2026-01-27 15:21:35.217 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:35 compute-0 nova_compute[185191]: 2026-01-27 15:21:35.528 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.460 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.486 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.487 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.488 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.488 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:36 compute-0 nova_compute[185191]: 2026-01-27 15:21:36.488 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:21:40 compute-0 nova_compute[185191]: 2026-01-27 15:21:40.219 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:40 compute-0 podman[243044]: 2026-01-27 15:21:40.310357181 +0000 UTC m=+0.070104537 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:21:40 compute-0 nova_compute[185191]: 2026-01-27 15:21:40.529 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:40 compute-0 sshd-session[243042]: Invalid user ethereum from 2.57.122.238 port 46654
Jan 27 15:21:40 compute-0 sshd-session[243042]: Connection closed by invalid user ethereum 2.57.122.238 port 46654 [preauth]
Jan 27 15:21:43 compute-0 podman[243063]: 2026-01-27 15:21:43.320806814 +0000 UTC m=+0.079292324 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:21:43 compute-0 podman[243065]: 2026-01-27 15:21:43.325280514 +0000 UTC m=+0.078399040 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Jan 27 15:21:43 compute-0 podman[243064]: 2026-01-27 15:21:43.353136214 +0000 UTC m=+0.107813922 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:21:45 compute-0 nova_compute[185191]: 2026-01-27 15:21:45.221 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:45 compute-0 nova_compute[185191]: 2026-01-27 15:21:45.532 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:50 compute-0 nova_compute[185191]: 2026-01-27 15:21:50.224 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:50 compute-0 nova_compute[185191]: 2026-01-27 15:21:50.534 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:51 compute-0 podman[243123]: 2026-01-27 15:21:51.313568772 +0000 UTC m=+0.073264612 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:21:55 compute-0 nova_compute[185191]: 2026-01-27 15:21:55.227 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:55 compute-0 nova_compute[185191]: 2026-01-27 15:21:55.537 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:21:56 compute-0 podman[243143]: 2026-01-27 15:21:56.341069944 +0000 UTC m=+0.096088546 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 27 15:21:56 compute-0 podman[243144]: 2026-01-27 15:21:56.353500568 +0000 UTC m=+0.106012713 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:21:59 compute-0 podman[201073]: time="2026-01-27T15:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:21:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:21:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 27 15:22:00 compute-0 nova_compute[185191]: 2026-01-27 15:22:00.229 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:00.235 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:00.236 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:00.237 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:00 compute-0 nova_compute[185191]: 2026-01-27 15:22:00.539 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:01 compute-0 podman[243186]: 2026-01-27 15:22:01.351437242 +0000 UTC m=+0.089901310 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:22:01 compute-0 anacron[4406]: Job `cron.monthly' started
Jan 27 15:22:01 compute-0 openstack_network_exporter[204239]: ERROR   15:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:22:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:22:01 compute-0 openstack_network_exporter[204239]: ERROR   15:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:22:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:22:01 compute-0 anacron[4406]: Job `cron.monthly' terminated
Jan 27 15:22:01 compute-0 anacron[4406]: Normal exit (3 jobs run)
Jan 27 15:22:05 compute-0 nova_compute[185191]: 2026-01-27 15:22:05.233 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:05 compute-0 nova_compute[185191]: 2026-01-27 15:22:05.541 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:10 compute-0 nova_compute[185191]: 2026-01-27 15:22:10.237 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:10 compute-0 nova_compute[185191]: 2026-01-27 15:22:10.543 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:11 compute-0 podman[243211]: 2026-01-27 15:22:11.365706166 +0000 UTC m=+0.106758564 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 15:22:14 compute-0 podman[243232]: 2026-01-27 15:22:14.354140687 +0000 UTC m=+0.097019881 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 27 15:22:14 compute-0 podman[243230]: 2026-01-27 15:22:14.383175378 +0000 UTC m=+0.132267639 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:22:14 compute-0 podman[243231]: 2026-01-27 15:22:14.406046034 +0000 UTC m=+0.154665973 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:22:15 compute-0 nova_compute[185191]: 2026-01-27 15:22:15.240 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:15 compute-0 nova_compute[185191]: 2026-01-27 15:22:15.547 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:20 compute-0 nova_compute[185191]: 2026-01-27 15:22:20.244 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:20 compute-0 nova_compute[185191]: 2026-01-27 15:22:20.549 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:22 compute-0 podman[243294]: 2026-01-27 15:22:22.313848566 +0000 UTC m=+0.060263532 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:22:25 compute-0 nova_compute[185191]: 2026-01-27 15:22:25.248 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:25 compute-0 nova_compute[185191]: 2026-01-27 15:22:25.551 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.387 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.388 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.388 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.388 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.388 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.390 185195 INFO nova.compute.manager [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Terminating instance
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.391 185195 DEBUG nova.compute.manager [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:22:27 compute-0 podman[243315]: 2026-01-27 15:22:27.447831159 +0000 UTC m=+0.073387846 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:22:27 compute-0 kernel: tap62a0d85c-d2 (unregistering): left promiscuous mode
Jan 27 15:22:27 compute-0 NetworkManager[56090]: <info>  [1769527347.4605] device (tap62a0d85c-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.472 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 ovn_controller[97541]: 2026-01-27T15:22:27Z|00050|binding|INFO|Releasing lport 62a0d85c-d24f-4ada-af0a-2b902803778f from this chassis (sb_readonly=0)
Jan 27 15:22:27 compute-0 ovn_controller[97541]: 2026-01-27T15:22:27Z|00051|binding|INFO|Setting lport 62a0d85c-d24f-4ada-af0a-2b902803778f down in Southbound
Jan 27 15:22:27 compute-0 ovn_controller[97541]: 2026-01-27T15:22:27Z|00052|binding|INFO|Removing iface tap62a0d85c-d2 ovn-installed in OVS
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.475 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 podman[243314]: 2026-01-27 15:22:27.484981828 +0000 UTC m=+0.113424112 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public)
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.489 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.503 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:86:b3 192.168.0.246'], port_security=['fa:16:3e:f3:86:b3 192.168.0.246'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-xakll53pfa3s-f35zx7ec3yvf-port-arkyrtjcq7v6', 'neutron:cidrs': '192.168.0.246/24', 'neutron:device_id': 'b98b01bd-8dfe-4188-be2f-ebffe0bd1717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-xakll53pfa3s-f35zx7ec3yvf-port-arkyrtjcq7v6', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '4', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=62a0d85c-d24f-4ada-af0a-2b902803778f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.504 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 62a0d85c-d24f-4ada-af0a-2b902803778f in datapath d7e37fe5-6354-4f61-95d0-78632be96811 unbound from our chassis
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.505 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:22:27 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 27 15:22:27 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 54.386s CPU time.
Jan 27 15:22:27 compute-0 systemd-machined[156506]: Machine qemu-2-instance-00000002 terminated.
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.523 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[74df6a31-ff5a-4c3e-9b9a-79d7f504a262]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.558 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[762296a8-72f5-48fd-99da-e7cd98bc92e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.561 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[21c1c0f3-cde0-49aa-b7df-2b404ff85cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.589 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[131ecb3f-e6c7-46be-a5da-71c882bb89c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.607 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a45a6937-2777-4a28-911a-78608f3281df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 23578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243370, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.618 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.625 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.629 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6327d5-b136-40db-a172-be7f62d65f18]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243373, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243373, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.631 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.633 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.639 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.640 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.640 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.641 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:22:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:27.641 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.675 185195 INFO nova.virt.libvirt.driver [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Instance destroyed successfully.
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.676 185195 DEBUG nova.objects.instance [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'resources' on Instance uuid b98b01bd-8dfe-4188-be2f-ebffe0bd1717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.731 185195 DEBUG nova.virt.libvirt.vif [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-xakll53pfa3s-f35zx7ec3yvf-vnf-4ob6teez7al4',id=2,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:11:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-d0vhof01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:11:52Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzQzMzI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 15:22:27 compute-0 nova_compute[185191]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzQzM
zI5NDM4OTk3Nzc3NTY5ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM0MzMyOTQzODk5Nzc3NzU2OTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNDMzMjk0Mzg5OTc3Nzc1Njk0PT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=b98b01bd-8dfe-4188-be2f-ebffe0bd1717,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.731 185195 DEBUG nova.network.os_vif_util [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "62a0d85c-d24f-4ada-af0a-2b902803778f", "address": "fa:16:3e:f3:86:b3", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.246", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a0d85c-d2", "ovs_interfaceid": "62a0d85c-d24f-4ada-af0a-2b902803778f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.732 185195 DEBUG nova.network.os_vif_util [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.732 185195 DEBUG os_vif [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.734 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.734 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62a0d85c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.736 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.737 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.740 185195 INFO os_vif [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:86:b3,bridge_name='br-int',has_traffic_filtering=True,id=62a0d85c-d24f-4ada-af0a-2b902803778f,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62a0d85c-d2')
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.740 185195 INFO nova.virt.libvirt.driver [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Deleting instance files /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717_del
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.741 185195 INFO nova.virt.libvirt.driver [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Deletion of /var/lib/nova/instances/b98b01bd-8dfe-4188-be2f-ebffe0bd1717_del complete
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.888 185195 DEBUG nova.virt.libvirt.host [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.889 185195 INFO nova.virt.libvirt.host [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] UEFI support detected
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.891 185195 INFO nova.compute.manager [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Took 0.50 seconds to destroy the instance on the hypervisor.
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.892 185195 DEBUG oslo.service.loopingcall [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.892 185195 DEBUG nova.compute.manager [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.892 185195 DEBUG nova.network.neutron [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.899 185195 DEBUG nova.compute.manager [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-vif-unplugged-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.899 185195 DEBUG oslo_concurrency.lockutils [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.899 185195 DEBUG oslo_concurrency.lockutils [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.899 185195 DEBUG oslo_concurrency.lockutils [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.900 185195 DEBUG nova.compute.manager [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] No waiting events found dispatching network-vif-unplugged-62a0d85c-d24f-4ada-af0a-2b902803778f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.900 185195 DEBUG nova.compute.manager [req-ac31e7ec-6663-4f74-8bed-12f7fbd2b93a req-6f3bb40e-1714-4e1a-a83c-e7aa685af449 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-vif-unplugged-62a0d85c-d24f-4ada-af0a-2b902803778f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:27 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:22:27.731 185195 DEBUG nova.virt.libvirt.vif [None req-6a87766b-91 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:27 compute-0 nova_compute[185191]: 2026-01-27 15:22:27.978 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.126 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.189 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.191 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.251 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.252 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.312 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.313 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.377 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.386 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.449 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.450 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.532 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.534 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.596 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.597 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.656 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.663 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.723 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.725 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.790 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.792 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:28.798 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:22:28 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:28.800 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.812 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.860 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.862 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:22:28 compute-0 nova_compute[185191]: 2026-01-27 15:22:28.927 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.297 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.299 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4765MB free_disk=72.37748718261719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.300 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.300 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.459 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.459 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.460 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.460 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.460 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.460 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.540 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.568 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.730 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:22:29 compute-0 nova_compute[185191]: 2026-01-27 15:22:29.731 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:29 compute-0 podman[201073]: time="2026-01-27T15:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:22:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:22:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.123 185195 DEBUG nova.compute.manager [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-changed-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.123 185195 DEBUG nova.compute.manager [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Refreshing instance network info cache due to event network-changed-62a0d85c-d24f-4ada-af0a-2b902803778f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.123 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.124 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.124 185195 DEBUG nova.network.neutron [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Refreshing network info cache for port 62a0d85c-d24f-4ada-af0a-2b902803778f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.456 185195 DEBUG nova.network.neutron [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.546 185195 INFO nova.network.neutron [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Port 62a0d85c-d24f-4ada-af0a-2b902803778f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.546 185195 DEBUG nova.network.neutron [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.553 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.572 185195 INFO nova.compute.manager [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Took 2.68 seconds to deallocate network for instance.
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.591 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b98b01bd-8dfe-4188-be2f-ebffe0bd1717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.591 185195 DEBUG nova.compute.manager [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.592 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.592 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.592 185195 DEBUG oslo_concurrency.lockutils [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.592 185195 DEBUG nova.compute.manager [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] No waiting events found dispatching network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.593 185195 WARNING nova.compute.manager [req-a41fea7a-3e24-44e3-a9fd-498a880c4952 req-ef7f3409-59ab-494c-99bc-c0c0e4c08b99 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Received unexpected event network-vif-plugged-62a0d85c-d24f-4ada-af0a-2b902803778f for instance with vm_state active and task_state deleting.
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.732 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.732 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.733 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.733 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.733 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:22:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:22:30.802 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.854 185195 DEBUG nova.compute.provider_tree [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:22:30 compute-0 nova_compute[185191]: 2026-01-27 15:22:30.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:31 compute-0 openstack_network_exporter[204239]: ERROR   15:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:22:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:22:31 compute-0 openstack_network_exporter[204239]: ERROR   15:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:22:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:22:31 compute-0 nova_compute[185191]: 2026-01-27 15:22:31.442 185195 DEBUG nova.scheduler.client.report [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:22:31 compute-0 nova_compute[185191]: 2026-01-27 15:22:31.560 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:31 compute-0 nova_compute[185191]: 2026-01-27 15:22:31.711 185195 INFO nova.scheduler.client.report [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Deleted allocations for instance b98b01bd-8dfe-4188-be2f-ebffe0bd1717
Jan 27 15:22:31 compute-0 nova_compute[185191]: 2026-01-27 15:22:31.890 185195 DEBUG oslo_concurrency.lockutils [None req-6a87766b-9198-4ae6-88ca-519dede39a8b 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "b98b01bd-8dfe-4188-be2f-ebffe0bd1717" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:22:31 compute-0 nova_compute[185191]: 2026-01-27 15:22:31.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:32 compute-0 podman[243431]: 2026-01-27 15:22:32.322200205 +0000 UTC m=+0.074345681 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:22:32 compute-0 nova_compute[185191]: 2026-01-27 15:22:32.737 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:32 compute-0 nova_compute[185191]: 2026-01-27 15:22:32.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:32 compute-0 nova_compute[185191]: 2026-01-27 15:22:32.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:22:33 compute-0 nova_compute[185191]: 2026-01-27 15:22:33.231 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:22:33 compute-0 nova_compute[185191]: 2026-01-27 15:22:33.231 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:22:33 compute-0 nova_compute[185191]: 2026-01-27 15:22:33.231 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:22:35 compute-0 nova_compute[185191]: 2026-01-27 15:22:35.367 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:22:35 compute-0 nova_compute[185191]: 2026-01-27 15:22:35.407 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:22:35 compute-0 nova_compute[185191]: 2026-01-27 15:22:35.408 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:22:35 compute-0 nova_compute[185191]: 2026-01-27 15:22:35.408 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:35 compute-0 nova_compute[185191]: 2026-01-27 15:22:35.555 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:36 compute-0 nova_compute[185191]: 2026-01-27 15:22:36.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:37 compute-0 nova_compute[185191]: 2026-01-27 15:22:37.738 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:37 compute-0 nova_compute[185191]: 2026-01-27 15:22:37.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:22:40 compute-0 nova_compute[185191]: 2026-01-27 15:22:40.558 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:42 compute-0 podman[243456]: 2026-01-27 15:22:42.340933921 +0000 UTC m=+0.090396323 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 27 15:22:42 compute-0 nova_compute[185191]: 2026-01-27 15:22:42.672 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769527347.6712184, b98b01bd-8dfe-4188-be2f-ebffe0bd1717 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:22:42 compute-0 nova_compute[185191]: 2026-01-27 15:22:42.673 185195 INFO nova.compute.manager [-] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] VM Stopped (Lifecycle Event)
Jan 27 15:22:42 compute-0 nova_compute[185191]: 2026-01-27 15:22:42.698 185195 DEBUG nova.compute.manager [None req-a00a3f1e-896d-40d0-9b0e-536897c90b94 - - - - - -] [instance: b98b01bd-8dfe-4188-be2f-ebffe0bd1717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:22:42 compute-0 nova_compute[185191]: 2026-01-27 15:22:42.741 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:44 compute-0 podman[243477]: 2026-01-27 15:22:44.771593127 +0000 UTC m=+0.085066830 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 27 15:22:44 compute-0 podman[243476]: 2026-01-27 15:22:44.799542329 +0000 UTC m=+0.116637509 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:22:44 compute-0 podman[243475]: 2026-01-27 15:22:44.800355761 +0000 UTC m=+0.115440707 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 15:22:45 compute-0 nova_compute[185191]: 2026-01-27 15:22:45.561 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:47 compute-0 nova_compute[185191]: 2026-01-27 15:22:47.743 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:50 compute-0 nova_compute[185191]: 2026-01-27 15:22:50.562 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:52 compute-0 nova_compute[185191]: 2026-01-27 15:22:52.745 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:53 compute-0 podman[243540]: 2026-01-27 15:22:53.319389167 +0000 UTC m=+0.072356197 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:22:55 compute-0 nova_compute[185191]: 2026-01-27 15:22:55.565 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:57 compute-0 nova_compute[185191]: 2026-01-27 15:22:57.747 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:22:58 compute-0 podman[243561]: 2026-01-27 15:22:58.321778561 +0000 UTC m=+0.068871233 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:22:58 compute-0 podman[243560]: 2026-01-27 15:22:58.357928624 +0000 UTC m=+0.105432328 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_id=kepler, maintainer=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Jan 27 15:22:59 compute-0 podman[201073]: time="2026-01-27T15:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:22:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:22:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:23:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:23:00.237 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:23:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:23:00.237 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:23:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:23:00.238 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:23:00 compute-0 nova_compute[185191]: 2026-01-27 15:23:00.567 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:01 compute-0 openstack_network_exporter[204239]: ERROR   15:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:23:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:23:01 compute-0 openstack_network_exporter[204239]: ERROR   15:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:23:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:23:02 compute-0 nova_compute[185191]: 2026-01-27 15:23:02.750 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:03 compute-0 ovn_controller[97541]: 2026-01-27T15:23:03Z|00053|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 27 15:23:03 compute-0 podman[243601]: 2026-01-27 15:23:03.306014343 +0000 UTC m=+0.060791906 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:23:05 compute-0 nova_compute[185191]: 2026-01-27 15:23:05.570 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:07 compute-0 sshd-session[243626]: banner exchange: Connection from 91.238.181.96 port 65191: invalid format
Jan 27 15:23:07 compute-0 nova_compute[185191]: 2026-01-27 15:23:07.753 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:10 compute-0 nova_compute[185191]: 2026-01-27 15:23:10.573 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.987 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.989 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac0b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:23:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:10.999 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'name': 'vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.006 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:23:11.007056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.101 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 4376430048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.102 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 12457092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.102 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.175 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 2049506219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.175 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 12506227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.175 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.240 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.241 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.241 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.243 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:23:11.242719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.243 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.243 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.244 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.244 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.244 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.244 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.245 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.245 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.246 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.246 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.246 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.246 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:23:11.246237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.272 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.272 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.273 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.293 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.293 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.294 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.315 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.316 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:23:11.316382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.320 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.323 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.327 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.329 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:23:11.329106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.330 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:23:11.330729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:23:11.332081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.332 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:23:11.333633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.357 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 34730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.380 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/cpu volume: 35790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.400 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 42780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.401 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.402 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.402 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:23:11.401968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:23:11.403876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.404 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.404 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:23:11.406043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.406 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.406 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.407 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.407 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:23:11.408331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.408 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.409 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.409 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:23:11.410705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.411 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.411 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.411 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:23:11.412706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.413 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.413 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.413 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.414 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.415 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:23:11.414808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.415 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.415 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:23:11.416819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.417 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.417 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.417 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.418 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.418 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.418 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.418 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.419 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.419 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:23:11.420440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.421 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.421 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:23:11.422362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.422 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.423 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.423 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.423 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.423 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.424 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.424 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.424 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.424 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:23:11.425923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.426 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.426 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.426 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.427 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.428 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 2012849032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:23:11.427862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.428 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 99931447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.428 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 145016237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.429 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 838942513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.429 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 127847454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.429 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.latency volume: 233079678 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.429 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.430 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.430 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.431 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.432 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.432 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:23:11.431759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.432 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.432 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.433 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.433 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.433 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.434 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.434 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:23:11.435293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.435 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.436 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.436 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.436 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.436 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.437 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.437 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.437 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.437 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:23:11.438847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.439 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.439 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.439 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.440 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:23:11.440763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.441 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.441 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.441 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.441 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.442 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.442 14 DEBUG ceilometer.compute.pollsters [-] 221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.442 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.442 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.442 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.443 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.443 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.443 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.443 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.444 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:23:11.445 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:23:12 compute-0 nova_compute[185191]: 2026-01-27 15:23:12.754 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:13 compute-0 podman[243628]: 2026-01-27 15:23:13.33044407 +0000 UTC m=+0.067661501 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:23:15 compute-0 podman[243648]: 2026-01-27 15:23:15.335992477 +0000 UTC m=+0.081168574 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:23:15 compute-0 podman[243646]: 2026-01-27 15:23:15.34203625 +0000 UTC m=+0.094196795 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:23:15 compute-0 podman[243647]: 2026-01-27 15:23:15.365318406 +0000 UTC m=+0.112805245 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_managed=true, container_name=ovn_controller)
Jan 27 15:23:15 compute-0 nova_compute[185191]: 2026-01-27 15:23:15.575 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:17 compute-0 nova_compute[185191]: 2026-01-27 15:23:17.757 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:20 compute-0 nova_compute[185191]: 2026-01-27 15:23:20.579 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:22 compute-0 nova_compute[185191]: 2026-01-27 15:23:22.759 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:24 compute-0 podman[243709]: 2026-01-27 15:23:24.315052608 +0000 UTC m=+0.065785461 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:23:25 compute-0 nova_compute[185191]: 2026-01-27 15:23:25.581 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.761 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:23:27 compute-0 nova_compute[185191]: 2026-01-27 15:23:27.984 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.375 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.445 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.447 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.508 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.509 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.596 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.602 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.668 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.675 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.744 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.746 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.816 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.817 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.885 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.886 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.949 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:28 compute-0 nova_compute[185191]: 2026-01-27 15:23:28.955 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.016 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.018 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.077 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.078 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.170 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.171 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.233 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:23:29 compute-0 podman[243766]: 2026-01-27 15:23:29.331402347 +0000 UTC m=+0.077350302 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:23:29 compute-0 podman[243763]: 2026-01-27 15:23:29.342008642 +0000 UTC m=+0.094251927 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, config_id=kepler, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.622 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.624 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4766MB free_disk=72.37750625610352GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.624 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.625 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.718 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.718 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.718 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.718 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.719 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:23:29 compute-0 podman[201073]: time="2026-01-27T15:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:23:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:23:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.807 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.826 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.866 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:23:29 compute-0 nova_compute[185191]: 2026-01-27 15:23:29.866 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:23:30 compute-0 nova_compute[185191]: 2026-01-27 15:23:30.585 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:31 compute-0 openstack_network_exporter[204239]: ERROR   15:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:23:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:23:31 compute-0 openstack_network_exporter[204239]: ERROR   15:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:23:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:23:31 compute-0 nova_compute[185191]: 2026-01-27 15:23:31.867 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:31 compute-0 nova_compute[185191]: 2026-01-27 15:23:31.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:31 compute-0 nova_compute[185191]: 2026-01-27 15:23:31.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:23:32 compute-0 nova_compute[185191]: 2026-01-27 15:23:32.763 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:32 compute-0 nova_compute[185191]: 2026-01-27 15:23:32.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:34 compute-0 podman[243807]: 2026-01-27 15:23:34.326488674 +0000 UTC m=+0.073289413 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:23:34 compute-0 nova_compute[185191]: 2026-01-27 15:23:34.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:34 compute-0 nova_compute[185191]: 2026-01-27 15:23:34.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:23:35 compute-0 nova_compute[185191]: 2026-01-27 15:23:35.198 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:23:35 compute-0 nova_compute[185191]: 2026-01-27 15:23:35.199 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:23:35 compute-0 nova_compute[185191]: 2026-01-27 15:23:35.199 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:23:35 compute-0 nova_compute[185191]: 2026-01-27 15:23:35.586 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:36 compute-0 nova_compute[185191]: 2026-01-27 15:23:36.909 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:23:37 compute-0 nova_compute[185191]: 2026-01-27 15:23:37.433 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:23:37 compute-0 nova_compute[185191]: 2026-01-27 15:23:37.434 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:23:37 compute-0 nova_compute[185191]: 2026-01-27 15:23:37.435 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:37 compute-0 nova_compute[185191]: 2026-01-27 15:23:37.766 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:37 compute-0 nova_compute[185191]: 2026-01-27 15:23:37.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:39 compute-0 nova_compute[185191]: 2026-01-27 15:23:39.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:23:40 compute-0 nova_compute[185191]: 2026-01-27 15:23:40.588 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:42 compute-0 nova_compute[185191]: 2026-01-27 15:23:42.769 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:44 compute-0 podman[243829]: 2026-01-27 15:23:44.307308829 +0000 UTC m=+0.073289413 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:23:45 compute-0 nova_compute[185191]: 2026-01-27 15:23:45.590 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:45 compute-0 sshd-session[243847]: Invalid user node from 2.57.122.238 port 43146
Jan 27 15:23:46 compute-0 podman[243851]: 2026-01-27 15:23:46.067893483 +0000 UTC m=+0.074903246 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public)
Jan 27 15:23:46 compute-0 podman[243849]: 2026-01-27 15:23:46.085303582 +0000 UTC m=+0.102977072 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 27 15:23:46 compute-0 sshd-session[243847]: Connection closed by invalid user node 2.57.122.238 port 43146 [preauth]
Jan 27 15:23:46 compute-0 podman[243850]: 2026-01-27 15:23:46.106210474 +0000 UTC m=+0.116020662 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 15:23:47 compute-0 nova_compute[185191]: 2026-01-27 15:23:47.772 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:50 compute-0 nova_compute[185191]: 2026-01-27 15:23:50.592 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:52 compute-0 nova_compute[185191]: 2026-01-27 15:23:52.775 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:55 compute-0 podman[243914]: 2026-01-27 15:23:55.32885503 +0000 UTC m=+0.080766194 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 15:23:55 compute-0 nova_compute[185191]: 2026-01-27 15:23:55.595 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:57 compute-0 nova_compute[185191]: 2026-01-27 15:23:57.779 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:23:59 compute-0 podman[201073]: time="2026-01-27T15:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:23:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:23:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 27 15:24:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:00.238 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:00.239 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:00.239 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:00 compute-0 podman[243933]: 2026-01-27 15:24:00.328296614 +0000 UTC m=+0.081902045 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:24:00 compute-0 podman[243932]: 2026-01-27 15:24:00.341033756 +0000 UTC m=+0.097914785 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:24:00 compute-0 nova_compute[185191]: 2026-01-27 15:24:00.599 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:01 compute-0 openstack_network_exporter[204239]: ERROR   15:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:24:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:24:01 compute-0 openstack_network_exporter[204239]: ERROR   15:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:24:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:24:02 compute-0 nova_compute[185191]: 2026-01-27 15:24:02.782 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:05 compute-0 podman[243973]: 2026-01-27 15:24:05.320041401 +0000 UTC m=+0.075517087 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:24:05 compute-0 nova_compute[185191]: 2026-01-27 15:24:05.600 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:07 compute-0 nova_compute[185191]: 2026-01-27 15:24:07.786 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:10 compute-0 nova_compute[185191]: 2026-01-27 15:24:10.603 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:12 compute-0 nova_compute[185191]: 2026-01-27 15:24:12.789 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:14 compute-0 podman[243997]: 2026-01-27 15:24:14.766952544 +0000 UTC m=+0.087242030 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:24:15 compute-0 nova_compute[185191]: 2026-01-27 15:24:15.605 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:16 compute-0 podman[244015]: 2026-01-27 15:24:16.333109195 +0000 UTC m=+0.080500270 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:24:16 compute-0 podman[244017]: 2026-01-27 15:24:16.354963068 +0000 UTC m=+0.091133804 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release 
of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:24:16 compute-0 podman[244016]: 2026-01-27 15:24:16.371037067 +0000 UTC m=+0.114241511 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:24:17 compute-0 nova_compute[185191]: 2026-01-27 15:24:17.790 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:20 compute-0 nova_compute[185191]: 2026-01-27 15:24:20.607 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:22 compute-0 nova_compute[185191]: 2026-01-27 15:24:22.793 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:25 compute-0 nova_compute[185191]: 2026-01-27 15:24:25.611 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:26 compute-0 podman[244078]: 2026-01-27 15:24:26.315055857 +0000 UTC m=+0.070626876 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 15:24:27 compute-0 nova_compute[185191]: 2026-01-27 15:24:27.796 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.987 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:28 compute-0 nova_compute[185191]: 2026-01-27 15:24:28.987 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.116 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.186 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.189 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.255 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.257 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.322 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.324 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.399 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.413 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.481 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.482 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.569 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.570 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.631 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.632 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.729 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.736 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 podman[201073]: time="2026-01-27T15:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:24:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:24:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.806 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.808 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.871 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.873 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.932 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.933 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.990 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.997 185195 DEBUG nova.compute.manager [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-changed-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.998 185195 DEBUG nova.compute.manager [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Refreshing instance network info cache due to event network-changed-0828fa2e-a05a-47f8-aab3-325c1f3f2c06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.998 185195 DEBUG oslo_concurrency.lockutils [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.998 185195 DEBUG oslo_concurrency.lockutils [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:24:29 compute-0 nova_compute[185191]: 2026-01-27 15:24:29.998 185195 DEBUG nova.network.neutron [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Refreshing network info cache for port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.379 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.380 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4758MB free_disk=72.37738037109375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.380 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.380 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.614 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.619 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.619 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.620 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.620 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.620 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.709 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.710 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.710 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.710 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.713 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.716 185195 INFO nova.compute.manager [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Terminating instance
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.718 185195 DEBUG nova.compute.manager [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.736 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.770 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:24:30 compute-0 kernel: tap0828fa2e-a0 (unregistering): left promiscuous mode
Jan 27 15:24:30 compute-0 NetworkManager[56090]: <info>  [1769527470.8076] device (tap0828fa2e-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.816 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.817 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.826 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:30 compute-0 ovn_controller[97541]: 2026-01-27T15:24:30Z|00054|binding|INFO|Releasing lport 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 from this chassis (sb_readonly=0)
Jan 27 15:24:30 compute-0 ovn_controller[97541]: 2026-01-27T15:24:30Z|00055|binding|INFO|Setting lport 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 down in Southbound
Jan 27 15:24:30 compute-0 ovn_controller[97541]: 2026-01-27T15:24:30Z|00056|binding|INFO|Removing iface tap0828fa2e-a0 ovn-installed in OVS
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.831 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.844 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:42:e6 192.168.0.205'], port_security=['fa:16:3e:ff:42:e6 192.168.0.205'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-yssw4loy7awz-x7cghs3h76zg-port-yufms5xkxqrs', 'neutron:cidrs': '192.168.0.205/24', 'neutron:device_id': '221a9a46-46a7-4a1b-ad5b-5d1eca64c106', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-yssw4loy7awz-x7cghs3h76zg-port-yufms5xkxqrs', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '4', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=0828fa2e-a05a-47f8-aab3-325c1f3f2c06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.845 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06 in datapath d7e37fe5-6354-4f61-95d0-78632be96811 unbound from our chassis
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.846 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.859 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.876 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[952ce3d1-e788-4ce6-903d-b256a9454ef1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:30 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 27 15:24:30 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 24.007s CPU time.
Jan 27 15:24:30 compute-0 systemd-machined[156506]: Machine qemu-3-instance-00000003 terminated.
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.924 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[133844ad-3c4e-4296-a3b2-67f958cba41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.928 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb79cad-5d48-4d81-9244-7e2508ac5071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.956 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:24:30 compute-0 nova_compute[185191]: 2026-01-27 15:24:30.955 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:30 compute-0 podman[244137]: 2026-01-27 15:24:30.959006175 +0000 UTC m=+0.107173522 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.970 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[7903aab9-8974-44df-920e-393a67e188ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:30 compute-0 podman[244138]: 2026-01-27 15:24:30.975241229 +0000 UTC m=+0.132447547 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:24:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:30.994 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3575a06a-1cfe-4735-9290-d784a74f0b28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 23578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244193, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.011 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0dedf5-d489-4ba5-abdd-f7c1e6c7a10f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244201, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244201, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.013 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.014 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.022 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.022 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.023 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.023 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.024 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:24:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:31.025 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.038 185195 INFO nova.virt.libvirt.driver [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Instance destroyed successfully.
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.039 185195 DEBUG nova.objects.instance [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'resources' on Instance uuid 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.072 185195 DEBUG nova.virt.libvirt.vif [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-yssw4loy7awz-x7cghs3h76zg-vnf-qosycueqkm5v',id=3,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:16:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-stizfz0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:16:52Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjgwMDgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 15:24:31 compute-0 nova_compute[185191]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjgwM
DgzNjEwNzU5MDM4MTc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY4MDA4MzYxMDc1OTAzODE3NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02ODAwODM2MTA3NTkwMzgxNzQxPT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=221a9a46-46a7-4a1b-ad5b-5d1eca64c106,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.073 185195 DEBUG nova.network.os_vif_util [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.074 185195 DEBUG nova.network.os_vif_util [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.075 185195 DEBUG os_vif [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.077 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.078 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0828fa2e-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.080 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.082 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.087 185195 INFO os_vif [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:42:e6,bridge_name='br-int',has_traffic_filtering=True,id=0828fa2e-a05a-47f8-aab3-325c1f3f2c06,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0828fa2e-a0')
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.088 185195 INFO nova.virt.libvirt.driver [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Deleting instance files /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106_del
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.089 185195 INFO nova.virt.libvirt.driver [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Deletion of /var/lib/nova/instances/221a9a46-46a7-4a1b-ad5b-5d1eca64c106_del complete
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.120 185195 DEBUG nova.compute.manager [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-vif-unplugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.120 185195 DEBUG oslo_concurrency.lockutils [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.121 185195 DEBUG oslo_concurrency.lockutils [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.121 185195 DEBUG oslo_concurrency.lockutils [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.121 185195 DEBUG nova.compute.manager [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] No waiting events found dispatching network-vif-unplugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.121 185195 DEBUG nova.compute.manager [req-01dc062b-a471-4216-9b9c-936c077321a0 req-ede8a9b9-bd73-462b-90ae-9bad9993e410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-vif-unplugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.269 185195 INFO nova.compute.manager [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Took 0.55 seconds to destroy the instance on the hypervisor.
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.270 185195 DEBUG oslo.service.loopingcall [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.270 185195 DEBUG nova.compute.manager [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.270 185195 DEBUG nova.network.neutron [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:24:31 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:24:31.072 185195 DEBUG nova.virt.libvirt.vif [None req-f058e214-1d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:24:31 compute-0 openstack_network_exporter[204239]: ERROR   15:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:24:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:24:31 compute-0 openstack_network_exporter[204239]: ERROR   15:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:24:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.584 185195 DEBUG nova.network.neutron [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updated VIF entry in instance network info cache for port 0828fa2e-a05a-47f8-aab3-325c1f3f2c06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.585 185195 DEBUG nova.network.neutron [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [{"id": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "address": "fa:16:3e:ff:42:e6", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.205", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0828fa2e-a0", "ovs_interfaceid": "0828fa2e-a05a-47f8-aab3-325c1f3f2c06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:24:31 compute-0 nova_compute[185191]: 2026-01-27 15:24:31.633 185195 DEBUG oslo_concurrency.lockutils [req-949de72f-964b-4b5e-a6b1-776675e63dfc req-01e487ee-bd3f-457f-bd32-68d338a171ba 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-221a9a46-46a7-4a1b-ad5b-5d1eca64c106" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:24:32 compute-0 nova_compute[185191]: 2026-01-27 15:24:32.820 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:32 compute-0 nova_compute[185191]: 2026-01-27 15:24:32.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:32 compute-0 nova_compute[185191]: 2026-01-27 15:24:32.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.267 185195 DEBUG nova.compute.manager [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.268 185195 DEBUG oslo_concurrency.lockutils [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.268 185195 DEBUG oslo_concurrency.lockutils [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.269 185195 DEBUG oslo_concurrency.lockutils [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.269 185195 DEBUG nova.compute.manager [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] No waiting events found dispatching network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.270 185195 WARNING nova.compute.manager [req-74c61016-3f77-4d5e-8002-991eb360d793 req-41cf5aa4-12aa-46e8-8d7d-cb0bb1316fab 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Received unexpected event network-vif-plugged-0828fa2e-a05a-47f8-aab3-325c1f3f2c06 for instance with vm_state active and task_state deleting.
Jan 27 15:24:33 compute-0 nova_compute[185191]: 2026-01-27 15:24:33.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.371 185195 DEBUG nova.network.neutron [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.440 185195 INFO nova.compute.manager [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Took 3.17 seconds to deallocate network for instance.
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.515 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.516 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.648 185195 DEBUG nova.compute.provider_tree [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.679 185195 DEBUG nova.scheduler.client.report [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.763 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.842 185195 INFO nova.scheduler.client.report [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Deleted allocations for instance 221a9a46-46a7-4a1b-ad5b-5d1eca64c106
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:34 compute-0 nova_compute[185191]: 2026-01-27 15:24:34.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:35 compute-0 nova_compute[185191]: 2026-01-27 15:24:35.097 185195 DEBUG oslo_concurrency.lockutils [None req-f058e214-1d50-47aa-89be-3ee764dc7225 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "221a9a46-46a7-4a1b-ad5b-5d1eca64c106" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:24:35 compute-0 nova_compute[185191]: 2026-01-27 15:24:35.615 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:24:36.027 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:24:36 compute-0 nova_compute[185191]: 2026-01-27 15:24:36.080 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:36 compute-0 podman[244210]: 2026-01-27 15:24:36.334104743 +0000 UTC m=+0.077113410 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:24:36 compute-0 nova_compute[185191]: 2026-01-27 15:24:36.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:36 compute-0 nova_compute[185191]: 2026-01-27 15:24:36.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:24:36 compute-0 nova_compute[185191]: 2026-01-27 15:24:36.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:24:37 compute-0 nova_compute[185191]: 2026-01-27 15:24:37.277 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:24:37 compute-0 nova_compute[185191]: 2026-01-27 15:24:37.277 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:24:37 compute-0 nova_compute[185191]: 2026-01-27 15:24:37.277 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:24:37 compute-0 nova_compute[185191]: 2026-01-27 15:24:37.278 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.090 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.155 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.155 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.156 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.617 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:40 compute-0 nova_compute[185191]: 2026-01-27 15:24:40.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:24:41 compute-0 nova_compute[185191]: 2026-01-27 15:24:41.083 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:45 compute-0 podman[244235]: 2026-01-27 15:24:45.336724652 +0000 UTC m=+0.092666434 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:24:45 compute-0 nova_compute[185191]: 2026-01-27 15:24:45.621 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:46 compute-0 nova_compute[185191]: 2026-01-27 15:24:46.035 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769527471.0340528, 221a9a46-46a7-4a1b-ad5b-5d1eca64c106 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:24:46 compute-0 nova_compute[185191]: 2026-01-27 15:24:46.036 185195 INFO nova.compute.manager [-] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] VM Stopped (Lifecycle Event)
Jan 27 15:24:46 compute-0 nova_compute[185191]: 2026-01-27 15:24:46.085 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:46 compute-0 nova_compute[185191]: 2026-01-27 15:24:46.122 185195 DEBUG nova.compute.manager [None req-91177468-56b4-4851-bc12-d50c09480490 - - - - - -] [instance: 221a9a46-46a7-4a1b-ad5b-5d1eca64c106] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:24:47 compute-0 podman[244254]: 2026-01-27 15:24:47.319802725 +0000 UTC m=+0.074658884 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:24:47 compute-0 podman[244256]: 2026-01-27 15:24:47.33047739 +0000 UTC m=+0.079067102 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64)
Jan 27 15:24:47 compute-0 podman[244255]: 2026-01-27 15:24:47.348990124 +0000 UTC m=+0.099692002 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:24:50 compute-0 nova_compute[185191]: 2026-01-27 15:24:50.623 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:51 compute-0 nova_compute[185191]: 2026-01-27 15:24:51.088 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:55 compute-0 nova_compute[185191]: 2026-01-27 15:24:55.625 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:56 compute-0 nova_compute[185191]: 2026-01-27 15:24:56.090 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:24:57 compute-0 podman[244321]: 2026-01-27 15:24:57.319536004 +0000 UTC m=+0.070958236 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:24:59 compute-0 podman[201073]: time="2026-01-27T15:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:24:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:24:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 27 15:25:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:25:00.240 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:25:00.240 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:25:00.240 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:00 compute-0 nova_compute[185191]: 2026-01-27 15:25:00.627 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:01 compute-0 nova_compute[185191]: 2026-01-27 15:25:01.092 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:01 compute-0 podman[244340]: 2026-01-27 15:25:01.338796474 +0000 UTC m=+0.088565445 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, name=ubi9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container)
Jan 27 15:25:01 compute-0 podman[244341]: 2026-01-27 15:25:01.345563815 +0000 UTC m=+0.089484120 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:25:01 compute-0 openstack_network_exporter[204239]: ERROR   15:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:25:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:25:01 compute-0 openstack_network_exporter[204239]: ERROR   15:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:25:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:25:05 compute-0 nova_compute[185191]: 2026-01-27 15:25:05.629 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:06 compute-0 nova_compute[185191]: 2026-01-27 15:25:06.094 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:06 compute-0 sshd-session[244382]: Accepted publickey for zuul from 38.129.56.249 port 56272 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 15:25:06 compute-0 systemd-logind[820]: New session 30 of user zuul.
Jan 27 15:25:06 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 27 15:25:06 compute-0 sshd-session[244382]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:25:06 compute-0 podman[244384]: 2026-01-27 15:25:06.646602704 +0000 UTC m=+0.064162844 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:25:07 compute-0 sudo[244583]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrbjhwywmhlgcktbncvrhajxnvznvux ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769527506.7369003-58398-65483659228447/AnsiballZ_command.py'
Jan 27 15:25:07 compute-0 sudo[244583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:25:07 compute-0 python3[244585]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:25:07 compute-0 sudo[244583]: pam_unix(sudo:session): session closed for user root
Jan 27 15:25:07 compute-0 ovn_controller[97541]: 2026-01-27T15:25:07Z|00057|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 27 15:25:10 compute-0 nova_compute[185191]: 2026-01-27 15:25:10.631 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.988 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.988 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:25:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:10.997 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.000 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.000 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:25:11.000774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.075 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 4376430048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.076 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 12457092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.076 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 nova_compute[185191]: 2026-01-27 15:25:11.095 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.148 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.148 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.149 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:25:11.150208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.150 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.151 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.151 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.151 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.151 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:25:11.152570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.176 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.176 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.177 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.200 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.200 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.200 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.201 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:25:11.202066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.206 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.209 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:25:11.210959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:25:11.212043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.213 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:25:11.213421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:25:11.214598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.234 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 35960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.253 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 44010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.253 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:25:11.254287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.255 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.255 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.255 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.256 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.256 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.256 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:25:11.255974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.257 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:25:11.257701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.258 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:25:11.259100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:25:11.260424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.260 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:25:11.261551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.261 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.262 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:25:11.262712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.264 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.264 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:25:11.264058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.264 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.265 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.265 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.265 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:25:11.266411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.266 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:25:11.267787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.268 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.269 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.270 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:25:11.270119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.270 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 2012849032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.271 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 99931447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.272 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 145016237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:25:11.271483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.272 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.272 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.273 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:25:11.274139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.274 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.275 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.275 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.275 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.276 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:25:11.276439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.277 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.277 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.277 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.277 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.278 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.279 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:25:11.278457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:25:11.279583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.280 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:25:11.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:25:15 compute-0 nova_compute[185191]: 2026-01-27 15:25:15.632 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:16 compute-0 nova_compute[185191]: 2026-01-27 15:25:16.097 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:16 compute-0 podman[244626]: 2026-01-27 15:25:16.338409882 +0000 UTC m=+0.086215563 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 27 15:25:16 compute-0 nova_compute[185191]: 2026-01-27 15:25:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:18 compute-0 podman[244645]: 2026-01-27 15:25:18.314684292 +0000 UTC m=+0.067507083 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:25:18 compute-0 podman[244647]: 2026-01-27 15:25:18.327783022 +0000 UTC m=+0.073498023 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Jan 27 15:25:18 compute-0 podman[244646]: 2026-01-27 15:25:18.350390696 +0000 UTC m=+0.098213223 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 15:25:20 compute-0 nova_compute[185191]: 2026-01-27 15:25:20.635 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:21 compute-0 nova_compute[185191]: 2026-01-27 15:25:21.100 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:25 compute-0 nova_compute[185191]: 2026-01-27 15:25:25.637 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.103 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.243 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "073b423b-ac2a-4123-bbf7-ee7affea7627" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.244 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.264 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.353 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.354 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.367 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.368 185195 INFO nova.compute.claims [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.556 185195 DEBUG nova.compute.provider_tree [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.578 185195 DEBUG nova.scheduler.client.report [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.605 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.606 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.661 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.716 185195 INFO nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.756 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.845 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.846 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.847 185195 INFO nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Creating image(s)
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.847 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.848 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.848 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.848 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:26 compute-0 nova_compute[185191]: 2026-01-27 15:25:26.849 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:27 compute-0 nova_compute[185191]: 2026-01-27 15:25:27.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:27 compute-0 nova_compute[185191]: 2026-01-27 15:25:27.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.011 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.081 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.083 185195 DEBUG nova.virt.images [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] 81f2bcb0-98f5-4a9b-b8e3-d2bb92e1508b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.088 185195 DEBUG nova.privsep.utils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.090 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.part /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:28 compute-0 podman[244715]: 2026-01-27 15:25:28.33118418 +0000 UTC m=+0.077735246 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 27 15:25:28 compute-0 nova_compute[185191]: 2026-01-27 15:25:28.971 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.004 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.005 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.005 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.005 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.106 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.168 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.169 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.228 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.229 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.296 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.297 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.361 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.367 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.426 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.427 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.484 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.486 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.559 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.560 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.622 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:29 compute-0 podman[201073]: time="2026-01-27T15:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:25:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:25:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.948 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.949 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=72.3651123046875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.950 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:29 compute-0 nova_compute[185191]: 2026-01-27 15:25:29.950 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.000 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.part /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.converted" returned: 0 in 1.910s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.003 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.063 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.064 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.081 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.081 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.082 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 073b423b-ac2a-4123-bbf7-ee7affea7627 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.082 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.083 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.087 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.139 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.141 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.141 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.153 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.168 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.208 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.209 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac,backing_fmt=raw /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.241 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.241 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.268 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.304 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.393 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.410 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.440 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.440 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.618 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac,backing_fmt=raw /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk 1073741824" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.619 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "6cfa0c50405f22bddeb2f4c2b9e121870dd7feac" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.620 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.639 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.676 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.677 185195 DEBUG nova.virt.disk.api [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Checking if we can resize image /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.678 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.734 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.735 185195 DEBUG nova.virt.disk.api [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Cannot resize image /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.735 185195 DEBUG nova.objects.instance [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'migration_context' on Instance uuid 073b423b-ac2a-4123-bbf7-ee7affea7627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.825 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.825 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.826 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.839 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.898 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.900 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.900 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.913 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.975 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:30 compute-0 nova_compute[185191]: 2026-01-27 15:25:30.976 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.106 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.177 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.eph0 1073741824" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.178 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.179 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.243 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.244 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.245 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Ensure instance console log exists: /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.246 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.246 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.247 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.249 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:25:12Z,direct_url=<?>,disk_format='qcow2',id=81f2bcb0-98f5-4a9b-b8e3-d2bb92e1508b,min_disk=0,min_ram=0,name='fvt_testing_image',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:25:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '81f2bcb0-98f5-4a9b-b8e3-d2bb92e1508b'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.257 185195 WARNING nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.269 185195 DEBUG nova.virt.libvirt.host [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.270 185195 DEBUG nova.virt.libvirt.host [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.276 185195 DEBUG nova.virt.libvirt.host [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.276 185195 DEBUG nova.virt.libvirt.host [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.278 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.278 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:25:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9c086862-2fa5-472a-8aaa-4489efa88ddb',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T15:25:12Z,direct_url=<?>,disk_format='qcow2',id=81f2bcb0-98f5-4a9b-b8e3-d2bb92e1508b,min_disk=0,min_ram=0,name='fvt_testing_image',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T15:25:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.278 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.279 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.279 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.279 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.280 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.280 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.280 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.281 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.281 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.281 185195 DEBUG nova.virt.hardware [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.285 185195 DEBUG nova.objects.instance [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 073b423b-ac2a-4123-bbf7-ee7affea7627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.303 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <uuid>073b423b-ac2a-4123-bbf7-ee7affea7627</uuid>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <name>instance-00000005</name>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <memory>524288</memory>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:name>fvt_testing_server</nova:name>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:25:31</nova:creationTime>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:flavor name="fvt_testing_flavor">
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:memory>512</nova:memory>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:user uuid="24260fb24da44b10b598f9c822c026b8">admin</nova:user>
Jan 27 15:25:31 compute-0 nova_compute[185191]:         <nova:project uuid="dd88ca4062da4fb9bedb3a0002a43c12">admin</nova:project>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="81f2bcb0-98f5-4a9b-b8e3-d2bb92e1508b"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <nova:ports/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <system>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="serial">073b423b-ac2a-4123-bbf7-ee7affea7627</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="uuid">073b423b-ac2a-4123-bbf7-ee7affea7627</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </system>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <os>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </os>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <features>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </features>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.eph0"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <target dev="vdb" bus="virtio"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.config"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/console.log" append="off"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <video>
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </video>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:25:31 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:25:31 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:25:31 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:25:31 compute-0 nova_compute[185191]: </domain>
Jan 27 15:25:31 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.369 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.370 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.370 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.371 185195 INFO nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Using config drive
Jan 27 15:25:31 compute-0 openstack_network_exporter[204239]: ERROR   15:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:25:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:25:31 compute-0 openstack_network_exporter[204239]: ERROR   15:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:25:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.581 185195 INFO nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Creating config drive at /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.config
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.586 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsbkfitd3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.712 185195 DEBUG oslo_concurrency.processutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsbkfitd3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:25:31 compute-0 systemd-machined[156506]: New machine qemu-5-instance-00000005.
Jan 27 15:25:31 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 27 15:25:31 compute-0 podman[244799]: 2026-01-27 15:25:31.888236952 +0000 UTC m=+0.088925615 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 15:25:31 compute-0 podman[244800]: 2026-01-27 15:25:31.895953728 +0000 UTC m=+0.097108754 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:25:31 compute-0 nova_compute[185191]: 2026-01-27 15:25:31.960 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.358 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527532.3577442, 073b423b-ac2a-4123-bbf7-ee7affea7627 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.359 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] VM Resumed (Lifecycle Event)
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.361 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.361 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.367 185195 INFO nova.virt.libvirt.driver [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Instance spawned successfully.
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.367 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.402 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.416 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.416 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.417 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.417 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.418 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.418 185195 DEBUG nova.virt.libvirt.driver [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.421 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.469 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.470 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769527532.3589401, 073b423b-ac2a-4123-bbf7-ee7affea7627 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.470 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] VM Started (Lifecycle Event)
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.503 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.508 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.514 185195 INFO nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Took 5.67 seconds to spawn the instance on the hypervisor.
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.514 185195 DEBUG nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.529 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.595 185195 INFO nova.compute.manager [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Took 6.28 seconds to build instance.
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.636 185195 DEBUG oslo_concurrency.lockutils [None req-1686bf5f-06c1-48c8-b2ec-597ac0afb6c5 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:32 compute-0 nova_compute[185191]: 2026-01-27 15:25:32.959 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:25:34 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:25:34 compute-0 nova_compute[185191]: 2026-01-27 15:25:34.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:34 compute-0 nova_compute[185191]: 2026-01-27 15:25:34.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:34 compute-0 nova_compute[185191]: 2026-01-27 15:25:34.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:25:35 compute-0 nova_compute[185191]: 2026-01-27 15:25:35.649 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:35 compute-0 nova_compute[185191]: 2026-01-27 15:25:35.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:36 compute-0 nova_compute[185191]: 2026-01-27 15:25:36.108 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:37 compute-0 podman[244881]: 2026-01-27 15:25:37.342610966 +0000 UTC m=+0.093899268 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:25:37 compute-0 nova_compute[185191]: 2026-01-27 15:25:37.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:37 compute-0 nova_compute[185191]: 2026-01-27 15:25:37.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:25:38 compute-0 nova_compute[185191]: 2026-01-27 15:25:38.466 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:25:38 compute-0 nova_compute[185191]: 2026-01-27 15:25:38.467 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:25:38 compute-0 nova_compute[185191]: 2026-01-27 15:25:38.467 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:25:39 compute-0 nova_compute[185191]: 2026-01-27 15:25:39.887 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:25:39 compute-0 nova_compute[185191]: 2026-01-27 15:25:39.920 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:25:39 compute-0 nova_compute[185191]: 2026-01-27 15:25:39.921 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:25:40 compute-0 nova_compute[185191]: 2026-01-27 15:25:40.647 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:40 compute-0 nova_compute[185191]: 2026-01-27 15:25:40.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:41 compute-0 nova_compute[185191]: 2026-01-27 15:25:41.110 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:41 compute-0 nova_compute[185191]: 2026-01-27 15:25:41.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:25:45 compute-0 nova_compute[185191]: 2026-01-27 15:25:45.649 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:46 compute-0 nova_compute[185191]: 2026-01-27 15:25:46.113 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:47 compute-0 podman[244904]: 2026-01-27 15:25:47.316835335 +0000 UTC m=+0.068772427 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 15:25:49 compute-0 podman[244925]: 2026-01-27 15:25:49.329487525 +0000 UTC m=+0.079053031 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter)
Jan 27 15:25:49 compute-0 podman[244923]: 2026-01-27 15:25:49.336517253 +0000 UTC m=+0.094709329 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:25:49 compute-0 podman[244924]: 2026-01-27 15:25:49.370212263 +0000 UTC m=+0.124884655 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:25:50 compute-0 nova_compute[185191]: 2026-01-27 15:25:50.652 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.115 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.699 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "073b423b-ac2a-4123-bbf7-ee7affea7627" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.700 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.700 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "073b423b-ac2a-4123-bbf7-ee7affea7627-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.700 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.701 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.702 185195 INFO nova.compute.manager [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Terminating instance
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.702 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "refresh_cache-073b423b-ac2a-4123-bbf7-ee7affea7627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.703 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquired lock "refresh_cache-073b423b-ac2a-4123-bbf7-ee7affea7627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:25:51 compute-0 nova_compute[185191]: 2026-01-27 15:25:51.703 185195 DEBUG nova.network.neutron [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:25:52 compute-0 nova_compute[185191]: 2026-01-27 15:25:52.466 185195 DEBUG nova.network.neutron [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:25:52 compute-0 nova_compute[185191]: 2026-01-27 15:25:52.942 185195 DEBUG nova.network.neutron [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:25:52 compute-0 nova_compute[185191]: 2026-01-27 15:25:52.966 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Releasing lock "refresh_cache-073b423b-ac2a-4123-bbf7-ee7affea7627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:25:52 compute-0 nova_compute[185191]: 2026-01-27 15:25:52.966 185195 DEBUG nova.compute.manager [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:25:53 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 27 15:25:53 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 21.250s CPU time.
Jan 27 15:25:53 compute-0 systemd-machined[156506]: Machine qemu-5-instance-00000005 terminated.
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.236 185195 INFO nova.virt.libvirt.driver [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Instance destroyed successfully.
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.237 185195 DEBUG nova.objects.instance [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'resources' on Instance uuid 073b423b-ac2a-4123-bbf7-ee7affea7627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.452 185195 INFO nova.virt.libvirt.driver [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Deleting instance files /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627_del
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.453 185195 INFO nova.virt.libvirt.driver [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Deletion of /var/lib/nova/instances/073b423b-ac2a-4123-bbf7-ee7affea7627_del complete
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.612 185195 INFO nova.compute.manager [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.613 185195 DEBUG oslo.service.loopingcall [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.614 185195 DEBUG nova.compute.manager [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:25:53 compute-0 nova_compute[185191]: 2026-01-27 15:25:53.614 185195 DEBUG nova.network.neutron [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.473 185195 DEBUG nova.network.neutron [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.495 185195 DEBUG nova.network.neutron [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.512 185195 INFO nova.compute.manager [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Took 0.90 seconds to deallocate network for instance.
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.562 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.563 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.665 185195 DEBUG nova.compute.provider_tree [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.686 185195 DEBUG nova.scheduler.client.report [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.717 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.759 185195 INFO nova.scheduler.client.report [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Deleted allocations for instance 073b423b-ac2a-4123-bbf7-ee7affea7627
Jan 27 15:25:54 compute-0 nova_compute[185191]: 2026-01-27 15:25:54.848 185195 DEBUG oslo_concurrency.lockutils [None req-28e688d2-8657-4439-9823-cbeef6a3ec78 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "073b423b-ac2a-4123-bbf7-ee7affea7627" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:25:55 compute-0 nova_compute[185191]: 2026-01-27 15:25:55.656 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:56 compute-0 nova_compute[185191]: 2026-01-27 15:25:56.117 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:25:59 compute-0 podman[245001]: 2026-01-27 15:25:59.32007707 +0000 UTC m=+0.072518266 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:25:59 compute-0 podman[201073]: time="2026-01-27T15:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:25:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:25:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:26:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:26:00.240 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:26:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:26:00.241 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:26:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:26:00.241 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:26:00 compute-0 nova_compute[185191]: 2026-01-27 15:26:00.657 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:01 compute-0 nova_compute[185191]: 2026-01-27 15:26:01.119 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:01 compute-0 sshd-session[245020]: Invalid user ubuntu from 2.57.122.238 port 46672
Jan 27 15:26:01 compute-0 openstack_network_exporter[204239]: ERROR   15:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:26:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:26:01 compute-0 openstack_network_exporter[204239]: ERROR   15:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:26:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:26:01 compute-0 sshd-session[245020]: Connection closed by invalid user ubuntu 2.57.122.238 port 46672 [preauth]
Jan 27 15:26:02 compute-0 podman[245023]: 2026-01-27 15:26:02.333422057 +0000 UTC m=+0.079983276 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:26:02 compute-0 podman[245022]: 2026-01-27 15:26:02.354632043 +0000 UTC m=+0.104166831 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=)
Jan 27 15:26:05 compute-0 nova_compute[185191]: 2026-01-27 15:26:05.659 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:06 compute-0 nova_compute[185191]: 2026-01-27 15:26:06.121 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:07 compute-0 sshd-session[244396]: Received disconnect from 38.129.56.249 port 56272:11: disconnected by user
Jan 27 15:26:07 compute-0 sshd-session[244396]: Disconnected from user zuul 38.129.56.249 port 56272
Jan 27 15:26:07 compute-0 sshd-session[244382]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:26:07 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 27 15:26:07 compute-0 systemd-logind[820]: Session 30 logged out. Waiting for processes to exit.
Jan 27 15:26:07 compute-0 systemd-logind[820]: Removed session 30.
Jan 27 15:26:07 compute-0 podman[245063]: 2026-01-27 15:26:07.553185787 +0000 UTC m=+0.081421095 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:26:08 compute-0 nova_compute[185191]: 2026-01-27 15:26:08.234 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769527553.2321827, 073b423b-ac2a-4123-bbf7-ee7affea7627 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:26:08 compute-0 nova_compute[185191]: 2026-01-27 15:26:08.236 185195 INFO nova.compute.manager [-] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] VM Stopped (Lifecycle Event)
Jan 27 15:26:08 compute-0 nova_compute[185191]: 2026-01-27 15:26:08.398 185195 DEBUG nova.compute.manager [None req-a6b48e3d-9255-4e83-8661-cb7fbb98954a - - - - - -] [instance: 073b423b-ac2a-4123-bbf7-ee7affea7627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:26:10 compute-0 nova_compute[185191]: 2026-01-27 15:26:10.661 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:11 compute-0 nova_compute[185191]: 2026-01-27 15:26:11.124 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:15 compute-0 nova_compute[185191]: 2026-01-27 15:26:15.664 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:16 compute-0 nova_compute[185191]: 2026-01-27 15:26:16.126 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:18 compute-0 podman[245086]: 2026-01-27 15:26:18.323199249 +0000 UTC m=+0.072590329 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:26:20 compute-0 podman[245108]: 2026-01-27 15:26:20.373919356 +0000 UTC m=+0.094854643 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Jan 27 15:26:20 compute-0 podman[245106]: 2026-01-27 15:26:20.390202121 +0000 UTC m=+0.128931273 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 15:26:20 compute-0 podman[245107]: 2026-01-27 15:26:20.402094629 +0000 UTC m=+0.132566151 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:26:20 compute-0 nova_compute[185191]: 2026-01-27 15:26:20.667 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:21 compute-0 nova_compute[185191]: 2026-01-27 15:26:21.130 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:25 compute-0 nova_compute[185191]: 2026-01-27 15:26:25.671 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:26 compute-0 nova_compute[185191]: 2026-01-27 15:26:26.133 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:28 compute-0 nova_compute[185191]: 2026-01-27 15:26:28.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.040 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.041 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.041 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.042 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.212 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.307 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.309 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.373 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.375 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.450 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.451 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.531 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.538 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.595 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.596 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.656 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.657 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.721 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.723 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:26:29 compute-0 podman[201073]: time="2026-01-27T15:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:26:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:26:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:26:29 compute-0 nova_compute[185191]: 2026-01-27 15:26:29.788 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.162 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.164 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=72.37249374389648GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.164 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.165 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.274 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.274 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.275 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.275 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:26:30 compute-0 podman[245191]: 2026-01-27 15:26:30.334033768 +0000 UTC m=+0.091653378 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.393 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.424 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.472 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.473 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:26:30 compute-0 nova_compute[185191]: 2026-01-27 15:26:30.674 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:31 compute-0 nova_compute[185191]: 2026-01-27 15:26:31.135 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:31 compute-0 openstack_network_exporter[204239]: ERROR   15:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:26:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:26:31 compute-0 openstack_network_exporter[204239]: ERROR   15:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:26:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:26:33 compute-0 podman[245210]: 2026-01-27 15:26:33.322931181 +0000 UTC m=+0.070812942 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:26:33 compute-0 podman[245211]: 2026-01-27 15:26:33.340975312 +0000 UTC m=+0.087686022 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:26:33 compute-0 nova_compute[185191]: 2026-01-27 15:26:33.469 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:33 compute-0 nova_compute[185191]: 2026-01-27 15:26:33.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:33 compute-0 nova_compute[185191]: 2026-01-27 15:26:33.985 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:35 compute-0 nova_compute[185191]: 2026-01-27 15:26:35.675 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:35 compute-0 nova_compute[185191]: 2026-01-27 15:26:35.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:35 compute-0 nova_compute[185191]: 2026-01-27 15:26:35.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:36 compute-0 nova_compute[185191]: 2026-01-27 15:26:36.138 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:36 compute-0 nova_compute[185191]: 2026-01-27 15:26:36.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:36 compute-0 nova_compute[185191]: 2026-01-27 15:26:36.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:26:37 compute-0 sshd-session[245251]: Accepted publickey for zuul from 38.129.56.249 port 33612 ssh2: RSA SHA256:hk2zKQl968MLJIxLeRmYoL19KGDGKglTIr8JoOEMMCU
Jan 27 15:26:37 compute-0 systemd-logind[820]: New session 31 of user zuul.
Jan 27 15:26:37 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 27 15:26:37 compute-0 sshd-session[245251]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:26:37 compute-0 podman[245253]: 2026-01-27 15:26:37.942037466 +0000 UTC m=+0.085829113 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:26:38 compute-0 sudo[245450]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbpzuezoxqwcigoynajaiqbnsklkfws ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769527597.9802206-59165-87896037015590/AnsiballZ_command.py'
Jan 27 15:26:38 compute-0 sudo[245450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:26:38 compute-0 python3[245452]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:26:38 compute-0 sudo[245450]: pam_unix(sudo:session): session closed for user root
Jan 27 15:26:39 compute-0 nova_compute[185191]: 2026-01-27 15:26:39.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:39 compute-0 nova_compute[185191]: 2026-01-27 15:26:39.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:26:39 compute-0 nova_compute[185191]: 2026-01-27 15:26:39.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:26:40 compute-0 nova_compute[185191]: 2026-01-27 15:26:40.534 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:26:40 compute-0 nova_compute[185191]: 2026-01-27 15:26:40.535 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:26:40 compute-0 nova_compute[185191]: 2026-01-27 15:26:40.535 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:26:40 compute-0 nova_compute[185191]: 2026-01-27 15:26:40.536 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:26:40 compute-0 nova_compute[185191]: 2026-01-27 15:26:40.678 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:41 compute-0 nova_compute[185191]: 2026-01-27 15:26:41.140 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:42 compute-0 nova_compute[185191]: 2026-01-27 15:26:42.654 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:26:42 compute-0 nova_compute[185191]: 2026-01-27 15:26:42.741 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:26:42 compute-0 nova_compute[185191]: 2026-01-27 15:26:42.742 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:26:42 compute-0 nova_compute[185191]: 2026-01-27 15:26:42.742 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:42 compute-0 nova_compute[185191]: 2026-01-27 15:26:42.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:26:45 compute-0 nova_compute[185191]: 2026-01-27 15:26:45.680 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:46 compute-0 nova_compute[185191]: 2026-01-27 15:26:46.142 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:47 compute-0 sudo[245664]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysdokpmhicizxrfflhqjdssscuwfbfns ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769527606.7194023-59330-182299809690091/AnsiballZ_command.py'
Jan 27 15:26:47 compute-0 sudo[245664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:26:47 compute-0 python3[245666]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:26:47 compute-0 sudo[245664]: pam_unix(sudo:session): session closed for user root
Jan 27 15:26:49 compute-0 podman[245705]: 2026-01-27 15:26:49.305205054 +0000 UTC m=+0.063762074 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 15:26:50 compute-0 nova_compute[185191]: 2026-01-27 15:26:50.683 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:51 compute-0 nova_compute[185191]: 2026-01-27 15:26:51.146 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:51 compute-0 podman[245725]: 2026-01-27 15:26:51.320581878 +0000 UTC m=+0.073080942 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:26:51 compute-0 podman[245727]: 2026-01-27 15:26:51.325942771 +0000 UTC m=+0.073547714 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc.)
Jan 27 15:26:51 compute-0 podman[245726]: 2026-01-27 15:26:51.352805178 +0000 UTC m=+0.103533365 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 15:26:55 compute-0 nova_compute[185191]: 2026-01-27 15:26:55.686 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:56 compute-0 nova_compute[185191]: 2026-01-27 15:26:56.148 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:26:58 compute-0 sudo[245959]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpywqesekiwpoeilhualflimdhpynfsv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769527618.2916007-59490-148730168690814/AnsiballZ_command.py'
Jan 27 15:26:58 compute-0 sudo[245959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:26:59 compute-0 python3[245961]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:26:59 compute-0 sudo[245959]: pam_unix(sudo:session): session closed for user root
Jan 27 15:26:59 compute-0 podman[201073]: time="2026-01-27T15:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:26:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:26:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 27 15:27:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:27:00.242 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:27:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:27:00.242 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:27:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:27:00.243 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:27:00 compute-0 nova_compute[185191]: 2026-01-27 15:27:00.689 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:01 compute-0 nova_compute[185191]: 2026-01-27 15:27:01.151 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:01 compute-0 podman[245998]: 2026-01-27 15:27:01.328264959 +0000 UTC m=+0.078711812 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:27:01 compute-0 openstack_network_exporter[204239]: ERROR   15:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:27:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:27:01 compute-0 openstack_network_exporter[204239]: ERROR   15:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:27:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:27:04 compute-0 podman[246019]: 2026-01-27 15:27:04.319006272 +0000 UTC m=+0.069193478 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:27:04 compute-0 podman[246018]: 2026-01-27 15:27:04.335476381 +0000 UTC m=+0.088326759 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Jan 27 15:27:05 compute-0 nova_compute[185191]: 2026-01-27 15:27:05.692 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:06 compute-0 nova_compute[185191]: 2026-01-27 15:27:06.154 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:08 compute-0 podman[246058]: 2026-01-27 15:27:08.308522688 +0000 UTC m=+0.068380766 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:27:10 compute-0 nova_compute[185191]: 2026-01-27 15:27:10.693 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.988 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.990 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:27:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:10.998 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.001 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:27:11.002071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.079 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 4376430048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.079 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 12457092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.080 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.147 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.148 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.148 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.152 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:27:11.152107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.153 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.154 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.154 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.155 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.155 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 nova_compute[185191]: 2026-01-27 15:27:11.158 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:27:11.158390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.181 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.181 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.182 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.215 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.216 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.216 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.217 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.218 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:27:11.218278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.223 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.227 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.228 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.229 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:27:11.229709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.230 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:27:11.231557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:27:11.233719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.233 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.234 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.235 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.236 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:27:11.236007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.266 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 37190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.294 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 45290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.296 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.297 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:27:11.297445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.298 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.299 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:27:11.300495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.300 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.301 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.302 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.303 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:27:11.303593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.304 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:27:11.306316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.306 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.307 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:27:11.309969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.310 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.312 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.312 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:27:11.312156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:27:11.314366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:27:11.316372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.316 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.317 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.317 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.317 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.318 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.318 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.319 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:27:11.319750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.320 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.320 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.322 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:27:11.322148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.322 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.323 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.323 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.324 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.324 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:27:11.325618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.326 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.326 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:27:11.327463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.328 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 2012849032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.328 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 99931447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.328 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 145016237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.329 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.329 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.329 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:27:11.331193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.331 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.332 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.332 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.332 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.333 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:27:11.334471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.334 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.335 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.335 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.335 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.336 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.336 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.338 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:27:11.337840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.338 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.339 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.340 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.340 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.340 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.341 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:27:11.339800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.341 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.342 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:27:11.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:27:15 compute-0 sudo[246254]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msodgwlmulpwxahdpxefsytycuwrbkfy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769527634.8488972-59712-216261564029455/AnsiballZ_command.py'
Jan 27 15:27:15 compute-0 sudo[246254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:27:15 compute-0 python3[246256]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 15:27:15 compute-0 sudo[246254]: pam_unix(sudo:session): session closed for user root
Jan 27 15:27:15 compute-0 nova_compute[185191]: 2026-01-27 15:27:15.695 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:16 compute-0 nova_compute[185191]: 2026-01-27 15:27:16.160 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:20 compute-0 podman[246296]: 2026-01-27 15:27:20.346607665 +0000 UTC m=+0.085465442 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:27:20 compute-0 nova_compute[185191]: 2026-01-27 15:27:20.697 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:21 compute-0 nova_compute[185191]: 2026-01-27 15:27:21.163 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:22 compute-0 podman[246315]: 2026-01-27 15:27:22.330311203 +0000 UTC m=+0.076721429 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:27:22 compute-0 podman[246316]: 2026-01-27 15:27:22.376844856 +0000 UTC m=+0.120769806 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS 
Stream 9 Base Image, container_name=ovn_controller)
Jan 27 15:27:22 compute-0 podman[246317]: 2026-01-27 15:27:22.386263827 +0000 UTC m=+0.121474424 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:27:25 compute-0 nova_compute[185191]: 2026-01-27 15:27:25.702 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:26 compute-0 nova_compute[185191]: 2026-01-27 15:27:26.165 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:29 compute-0 sshd-session[246380]: Connection closed by 45.148.10.240 port 43430
Jan 27 15:27:29 compute-0 podman[201073]: time="2026-01-27T15:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:27:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:27:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 27 15:27:29 compute-0 nova_compute[185191]: 2026-01-27 15:27:29.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:29 compute-0 nova_compute[185191]: 2026-01-27 15:27:29.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:27:29 compute-0 nova_compute[185191]: 2026-01-27 15:27:29.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:27:29 compute-0 nova_compute[185191]: 2026-01-27 15:27:29.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:27:29 compute-0 nova_compute[185191]: 2026-01-27 15:27:29.976 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.068 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.135 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.136 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.198 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.199 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.272 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.274 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.336 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.343 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.406 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.407 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.475 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.476 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.549 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.550 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.611 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.704 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.981 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.983 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=72.37196350097656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:27:30 compute-0 nova_compute[185191]: 2026-01-27 15:27:30.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.067 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.068 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.068 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.069 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.119 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.135 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.137 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.138 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:27:31 compute-0 nova_compute[185191]: 2026-01-27 15:27:31.166 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:31 compute-0 openstack_network_exporter[204239]: ERROR   15:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:27:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:27:31 compute-0 openstack_network_exporter[204239]: ERROR   15:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:27:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:27:31 compute-0 podman[246405]: 2026-01-27 15:27:31.819725519 +0000 UTC m=+0.078268031 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 27 15:27:34 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:27:34 compute-0 podman[246425]: 2026-01-27 15:27:34.622057951 +0000 UTC m=+0.057428164 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:27:34 compute-0 podman[246424]: 2026-01-27 15:27:34.638109189 +0000 UTC m=+0.074777377 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 
9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 15:27:35 compute-0 nova_compute[185191]: 2026-01-27 15:27:35.134 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:35 compute-0 nova_compute[185191]: 2026-01-27 15:27:35.135 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:35 compute-0 nova_compute[185191]: 2026-01-27 15:27:35.706 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:35 compute-0 nova_compute[185191]: 2026-01-27 15:27:35.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:36 compute-0 nova_compute[185191]: 2026-01-27 15:27:36.169 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:36 compute-0 nova_compute[185191]: 2026-01-27 15:27:36.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:37 compute-0 nova_compute[185191]: 2026-01-27 15:27:37.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:37 compute-0 nova_compute[185191]: 2026-01-27 15:27:37.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:27:39 compute-0 podman[246467]: 2026-01-27 15:27:39.302621405 +0000 UTC m=+0.061832611 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:27:39 compute-0 nova_compute[185191]: 2026-01-27 15:27:39.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:39 compute-0 nova_compute[185191]: 2026-01-27 15:27:39.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:27:40 compute-0 nova_compute[185191]: 2026-01-27 15:27:40.421 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:27:40 compute-0 nova_compute[185191]: 2026-01-27 15:27:40.421 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:27:40 compute-0 nova_compute[185191]: 2026-01-27 15:27:40.422 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:27:40 compute-0 nova_compute[185191]: 2026-01-27 15:27:40.708 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:41 compute-0 nova_compute[185191]: 2026-01-27 15:27:41.172 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:43 compute-0 nova_compute[185191]: 2026-01-27 15:27:43.665 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:27:43 compute-0 nova_compute[185191]: 2026-01-27 15:27:43.793 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:27:43 compute-0 nova_compute[185191]: 2026-01-27 15:27:43.794 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:27:43 compute-0 nova_compute[185191]: 2026-01-27 15:27:43.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:43 compute-0 nova_compute[185191]: 2026-01-27 15:27:43.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:27:45 compute-0 nova_compute[185191]: 2026-01-27 15:27:45.711 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:46 compute-0 nova_compute[185191]: 2026-01-27 15:27:46.175 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:50 compute-0 nova_compute[185191]: 2026-01-27 15:27:50.713 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:51 compute-0 nova_compute[185191]: 2026-01-27 15:27:51.176 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:51 compute-0 podman[246491]: 2026-01-27 15:27:51.311146772 +0000 UTC m=+0.065637994 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Jan 27 15:27:53 compute-0 podman[246518]: 2026-01-27 15:27:53.337248053 +0000 UTC m=+0.071141220 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 27 15:27:53 compute-0 podman[246511]: 2026-01-27 15:27:53.353917778 +0000 UTC m=+0.104624244 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126)
Jan 27 15:27:53 compute-0 podman[246512]: 2026-01-27 15:27:53.399896105 +0000 UTC m=+0.141125248 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 15:27:55 compute-0 nova_compute[185191]: 2026-01-27 15:27:55.715 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:56 compute-0 nova_compute[185191]: 2026-01-27 15:27:56.179 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:27:59 compute-0 podman[201073]: time="2026-01-27T15:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:27:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:27:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 15:28:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:28:00.243 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:28:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:28:00.244 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:28:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:28:00.245 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:28:00 compute-0 nova_compute[185191]: 2026-01-27 15:28:00.717 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:01 compute-0 nova_compute[185191]: 2026-01-27 15:28:01.181 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:01 compute-0 openstack_network_exporter[204239]: ERROR   15:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:28:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:28:01 compute-0 openstack_network_exporter[204239]: ERROR   15:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:28:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:28:02 compute-0 podman[246574]: 2026-01-27 15:28:02.306406169 +0000 UTC m=+0.061630857 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:28:05 compute-0 podman[246595]: 2026-01-27 15:28:05.311334589 +0000 UTC m=+0.067298867 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 15:28:05 compute-0 podman[246596]: 2026-01-27 15:28:05.332912095 +0000 UTC m=+0.086493740 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:28:05 compute-0 nova_compute[185191]: 2026-01-27 15:28:05.718 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:06 compute-0 nova_compute[185191]: 2026-01-27 15:28:06.184 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:10 compute-0 podman[246636]: 2026-01-27 15:28:10.297986796 +0000 UTC m=+0.059468189 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:28:10 compute-0 sshd-session[246634]: Invalid user validator from 2.57.122.238 port 54464
Jan 27 15:28:10 compute-0 sshd-session[246634]: Connection closed by invalid user validator 2.57.122.238 port 54464 [preauth]
Jan 27 15:28:10 compute-0 nova_compute[185191]: 2026-01-27 15:28:10.722 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:11 compute-0 nova_compute[185191]: 2026-01-27 15:28:11.186 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:15 compute-0 sshd-session[245260]: Received disconnect from 38.129.56.249 port 33612:11: disconnected by user
Jan 27 15:28:15 compute-0 sshd-session[245260]: Disconnected from user zuul 38.129.56.249 port 33612
Jan 27 15:28:15 compute-0 sshd-session[245251]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:28:15 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 27 15:28:15 compute-0 systemd[1]: session-31.scope: Consumed 3.615s CPU time.
Jan 27 15:28:15 compute-0 systemd-logind[820]: Session 31 logged out. Waiting for processes to exit.
Jan 27 15:28:15 compute-0 systemd-logind[820]: Removed session 31.
Jan 27 15:28:15 compute-0 nova_compute[185191]: 2026-01-27 15:28:15.723 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:16 compute-0 nova_compute[185191]: 2026-01-27 15:28:16.187 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:20 compute-0 nova_compute[185191]: 2026-01-27 15:28:20.726 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:21 compute-0 nova_compute[185191]: 2026-01-27 15:28:21.190 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:22 compute-0 podman[246661]: 2026-01-27 15:28:22.307390228 +0000 UTC m=+0.057353926 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 15:28:24 compute-0 podman[246679]: 2026-01-27 15:28:24.318965592 +0000 UTC m=+0.073357564 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:28:24 compute-0 podman[246681]: 2026-01-27 15:28:24.327867311 +0000 UTC m=+0.074410643 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible)
Jan 27 15:28:24 compute-0 podman[246680]: 2026-01-27 15:28:24.38724883 +0000 UTC m=+0.128745267 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:28:25 compute-0 nova_compute[185191]: 2026-01-27 15:28:25.728 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:26 compute-0 nova_compute[185191]: 2026-01-27 15:28:26.191 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:29 compute-0 podman[201073]: time="2026-01-27T15:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:28:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:28:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 27 15:28:29 compute-0 nova_compute[185191]: 2026-01-27 15:28:29.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:29 compute-0 nova_compute[185191]: 2026-01-27 15:28:29.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:28:29 compute-0 nova_compute[185191]: 2026-01-27 15:28:29.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:28:29 compute-0 nova_compute[185191]: 2026-01-27 15:28:29.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:28:29 compute-0 nova_compute[185191]: 2026-01-27 15:28:29.985 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.106 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.178 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.179 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.236 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.238 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.300 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.301 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.364 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.370 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.428 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.429 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.487 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.488 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.546 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.547 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.605 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.730 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.939 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.940 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4933MB free_disk=72.37196350097656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.940 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:28:30 compute-0 nova_compute[185191]: 2026-01-27 15:28:30.941 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.194 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.272 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.272 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance d855a654-d263-4516-8382-efa129798a0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.273 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.273 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.345 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.372 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.374 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:28:31 compute-0 nova_compute[185191]: 2026-01-27 15:28:31.374 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:28:31 compute-0 openstack_network_exporter[204239]: ERROR   15:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:28:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:28:31 compute-0 openstack_network_exporter[204239]: ERROR   15:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:28:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:28:33 compute-0 podman[246761]: 2026-01-27 15:28:33.303204149 +0000 UTC m=+0.061018814 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:28:35 compute-0 nova_compute[185191]: 2026-01-27 15:28:35.732 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:36 compute-0 nova_compute[185191]: 2026-01-27 15:28:36.196 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:36 compute-0 podman[246781]: 2026-01-27 15:28:36.324198638 +0000 UTC m=+0.076960620 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-type=git)
Jan 27 15:28:36 compute-0 podman[246782]: 2026-01-27 15:28:36.33323488 +0000 UTC m=+0.082165650 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:28:37 compute-0 nova_compute[185191]: 2026-01-27 15:28:37.369 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:37 compute-0 nova_compute[185191]: 2026-01-27 15:28:37.369 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:37 compute-0 nova_compute[185191]: 2026-01-27 15:28:37.397 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:37 compute-0 nova_compute[185191]: 2026-01-27 15:28:37.398 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:37 compute-0 nova_compute[185191]: 2026-01-27 15:28:37.398 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:38 compute-0 nova_compute[185191]: 2026-01-27 15:28:38.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:38 compute-0 nova_compute[185191]: 2026-01-27 15:28:38.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:28:39 compute-0 nova_compute[185191]: 2026-01-27 15:28:39.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:39 compute-0 nova_compute[185191]: 2026-01-27 15:28:39.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:28:39 compute-0 nova_compute[185191]: 2026-01-27 15:28:39.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:28:40 compute-0 nova_compute[185191]: 2026-01-27 15:28:40.647 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:28:40 compute-0 nova_compute[185191]: 2026-01-27 15:28:40.648 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:28:40 compute-0 nova_compute[185191]: 2026-01-27 15:28:40.648 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:28:40 compute-0 nova_compute[185191]: 2026-01-27 15:28:40.648 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:28:40 compute-0 nova_compute[185191]: 2026-01-27 15:28:40.734 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:41 compute-0 nova_compute[185191]: 2026-01-27 15:28:41.198 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:41 compute-0 podman[246824]: 2026-01-27 15:28:41.296553798 +0000 UTC m=+0.056199025 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:28:43 compute-0 nova_compute[185191]: 2026-01-27 15:28:43.028 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [{"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:28:43 compute-0 nova_compute[185191]: 2026-01-27 15:28:43.071 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:28:43 compute-0 nova_compute[185191]: 2026-01-27 15:28:43.072 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:28:44 compute-0 nova_compute[185191]: 2026-01-27 15:28:44.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:44 compute-0 nova_compute[185191]: 2026-01-27 15:28:44.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:28:45 compute-0 nova_compute[185191]: 2026-01-27 15:28:45.736 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:46 compute-0 nova_compute[185191]: 2026-01-27 15:28:46.201 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:50 compute-0 nova_compute[185191]: 2026-01-27 15:28:50.737 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:51 compute-0 nova_compute[185191]: 2026-01-27 15:28:51.202 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:53 compute-0 podman[246849]: 2026-01-27 15:28:53.337609203 +0000 UTC m=+0.095295551 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:28:55 compute-0 podman[246867]: 2026-01-27 15:28:55.342426526 +0000 UTC m=+0.082386206 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 27 15:28:55 compute-0 podman[246869]: 2026-01-27 15:28:55.366362276 +0000 UTC m=+0.113180380 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 
Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter)
Jan 27 15:28:55 compute-0 podman[246868]: 2026-01-27 15:28:55.376273202 +0000 UTC m=+0.122466809 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 15:28:55 compute-0 nova_compute[185191]: 2026-01-27 15:28:55.739 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:56 compute-0 nova_compute[185191]: 2026-01-27 15:28:56.205 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:28:59 compute-0 podman[201073]: time="2026-01-27T15:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:28:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:28:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 27 15:29:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:00.245 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:00.246 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:00.246 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:00 compute-0 nova_compute[185191]: 2026-01-27 15:29:00.741 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:01 compute-0 nova_compute[185191]: 2026-01-27 15:29:01.207 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:01 compute-0 openstack_network_exporter[204239]: ERROR   15:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:29:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:29:01 compute-0 openstack_network_exporter[204239]: ERROR   15:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:29:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:29:04 compute-0 podman[246929]: 2026-01-27 15:29:04.307081128 +0000 UTC m=+0.063719666 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:29:05 compute-0 nova_compute[185191]: 2026-01-27 15:29:05.743 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:06 compute-0 nova_compute[185191]: 2026-01-27 15:29:06.209 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:07 compute-0 podman[246949]: 2026-01-27 15:29:07.315995103 +0000 UTC m=+0.069849191 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., 
managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, config_id=kepler)
Jan 27 15:29:07 compute-0 podman[246950]: 2026-01-27 15:29:07.318630883 +0000 UTC m=+0.066757977 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:29:10 compute-0 nova_compute[185191]: 2026-01-27 15:29:10.745 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.989 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.989 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbac35f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd855a654-d263-4516-8382-efa129798a0d', 'name': 'vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {'metering.server_group': '92e45285-9077-420c-bb23-df5c16dca6b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'name': 'test_0', 'flavor': {'id': '26a24ace-a5af-47b3-9314-7d2b9e74c6b8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2b336e4b-c98e-4b97-9f8f-b3290e6b6caf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'user_id': '24260fb24da44b10b598f9c822c026b8', 'hostId': '3ceda6555cc5777899d12334c8ffc2eada08918754ac5a04c50fe3bb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:29:11.005716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.077 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 4376430048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.078 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 12457092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.078 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.157 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 3771884583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.157 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 11291751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.159 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:29:11.158996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.159 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.159 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.160 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.160 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.160 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:29:11.161512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.188 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.189 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.189 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 nova_compute[185191]: 2026-01-27 15:29:11.211 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.220 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.221 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.221 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.223 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:29:11.223547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.229 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.233 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.234 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.235 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:29:11.235052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:29:11.236461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:29:11.238431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.238 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.239 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.239 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.240 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.240 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.240 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:29:11.240203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.266 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/cpu volume: 38330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.298 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/cpu volume: 46460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.299 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.300 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:29:11.299783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:29:11.301144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:29:11.302493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.302 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:29:11.303988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.305 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:29:11.305401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.306 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:29:11.306627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:29:11.307907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.308 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:29:11.309138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.309 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.310 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.311 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:29:11.311540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.313 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.313 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:29:11.312894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:29:11.315006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 2012849032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 99931447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.latency volume: 145016237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 1242591197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.316 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 114890665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.317 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.latency volume: 113913681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:29:11.316099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:29:11.318366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.318 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.319 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.319 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.319 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.319 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:29:11.320789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.321 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.323 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:29:11.322986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.323 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.324 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:29:11.324375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.325 14 DEBUG ceilometer.compute.pollsters [-] d855a654-d263-4516-8382-efa129798a0d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.325 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.325 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.325 14 DEBUG ceilometer.compute.pollsters [-] 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:29:11.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:29:12 compute-0 podman[246991]: 2026-01-27 15:29:12.345704386 +0000 UTC m=+0.102948876 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:29:15 compute-0 nova_compute[185191]: 2026-01-27 15:29:15.747 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:16 compute-0 nova_compute[185191]: 2026-01-27 15:29:16.214 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:16 compute-0 rsyslogd[235702]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 15:29:16 compute-0 rsyslogd[235702]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 15:29:20 compute-0 nova_compute[185191]: 2026-01-27 15:29:20.748 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:21 compute-0 nova_compute[185191]: 2026-01-27 15:29:21.216 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:24 compute-0 podman[247016]: 2026-01-27 15:29:24.308450876 +0000 UTC m=+0.058009363 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 15:29:25 compute-0 nova_compute[185191]: 2026-01-27 15:29:25.750 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.219 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:26 compute-0 podman[247037]: 2026-01-27 15:29:26.321792809 +0000 UTC m=+0.073255672 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:29:26 compute-0 podman[247035]: 2026-01-27 15:29:26.321980294 +0000 UTC m=+0.075965604 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 27 15:29:26 compute-0 podman[247036]: 2026-01-27 15:29:26.35508555 +0000 UTC m=+0.108293219 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.912 185195 DEBUG nova.compute.manager [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-changed-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.912 185195 DEBUG nova.compute.manager [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Refreshing instance network info cache due to event network-changed-2bcdea5a-f4b9-4e61-9a89-5af70265faba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.913 185195 DEBUG oslo_concurrency.lockutils [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.913 185195 DEBUG oslo_concurrency.lockutils [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:29:26 compute-0 nova_compute[185191]: 2026-01-27 15:29:26.913 185195 DEBUG nova.network.neutron [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Refreshing network info cache for port 2bcdea5a-f4b9-4e61-9a89-5af70265faba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.462 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.463 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.463 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.642 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.642 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.643 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.643 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.644 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.645 185195 INFO nova.compute.manager [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Terminating instance
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.646 185195 DEBUG nova.compute.manager [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:29:27 compute-0 kernel: tap2bcdea5a-f4 (unregistering): left promiscuous mode
Jan 27 15:29:27 compute-0 NetworkManager[56090]: <info>  [1769527767.6821] device (tap2bcdea5a-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:29:27 compute-0 ovn_controller[97541]: 2026-01-27T15:29:27Z|00058|binding|INFO|Releasing lport 2bcdea5a-f4b9-4e61-9a89-5af70265faba from this chassis (sb_readonly=0)
Jan 27 15:29:27 compute-0 ovn_controller[97541]: 2026-01-27T15:29:27Z|00059|binding|INFO|Setting lport 2bcdea5a-f4b9-4e61-9a89-5af70265faba down in Southbound
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.692 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 ovn_controller[97541]: 2026-01-27T15:29:27Z|00060|binding|INFO|Removing iface tap2bcdea5a-f4 ovn-installed in OVS
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.695 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.708 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:7d:58 192.168.0.20'], port_security=['fa:16:3e:36:7d:58 192.168.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-smi76etv33tn-v6zwyh72ilg3-valx525hdwsf-port-mbhcac6i36zf', 'neutron:cidrs': '192.168.0.20/24', 'neutron:device_id': 'd855a654-d263-4516-8382-efa129798a0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-smi76etv33tn-v6zwyh72ilg3-valx525hdwsf-port-mbhcac6i36zf', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '4', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=2bcdea5a-f4b9-4e61-9a89-5af70265faba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.708 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.709 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 2bcdea5a-f4b9-4e61-9a89-5af70265faba in datapath d7e37fe5-6354-4f61-95d0-78632be96811 unbound from our chassis
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.711 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7e37fe5-6354-4f61-95d0-78632be96811
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.725 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1f795be9-8384-4097-8553-121ac3a4ada1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 27 15:29:27 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 44.737s CPU time.
Jan 27 15:29:27 compute-0 systemd-machined[156506]: Machine qemu-4-instance-00000004 terminated.
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.760 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e0ab4-2573-44db-86c6-17247463e62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.765 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[91bfeea7-8d7b-42d2-bff3-b83e041d770d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.791 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[e363cf62-8004-45a8-891c-ae89bb4f46e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.809 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[48bd90c3-c077-4811-972c-d7fc51c0be0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7e37fe5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:72:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420463, 'reachable_time': 18698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247110, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.825 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[49610d67-36b2-4d8e-9329-0f20d4f96cab]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420478, 'tstamp': 420478}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247111, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7e37fe5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420481, 'tstamp': 420481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247111, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.826 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.828 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.833 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.834 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e37fe5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.834 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.835 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7e37fe5-60, col_values=(('external_ids', {'iface-id': 'd4262905-2cdc-4929-a155-db8204d90ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:27 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:27.835 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.940 185195 INFO nova.virt.libvirt.driver [-] [instance: d855a654-d263-4516-8382-efa129798a0d] Instance destroyed successfully.
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.942 185195 DEBUG nova.objects.instance [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'resources' on Instance uuid d855a654-d263-4516-8382-efa129798a0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.957 185195 DEBUG nova.virt.libvirt.vif [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-etv33tn-v6zwyh72ilg3-valx525hdwsf-vnf-2xkstmqcn2oj',id=4,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:18:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='92e45285-9077-420c-bb23-df5c16dca6b3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-tvamciqz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:18:46Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDM5OTU2Njg3MTMxMTcxMDc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 15:29:27 compute-0 nova_compute[185191]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDM5O
TU2Njg3MTMxMTcxMDc0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQzOTk1NjY4NzEzMTE3MTA3NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00Mzk5NTY2ODcxMzExNzEwNzQxPT0tLQo=',user_id='24260fb24da44b10b598f9c822c026b8',uuid=d855a654-d263-4516-8382-efa129798a0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.957 185195 DEBUG nova.network.os_vif_util [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.958 185195 DEBUG nova.network.os_vif_util [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.958 185195 DEBUG os_vif [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.960 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.960 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bcdea5a-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.963 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.965 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.967 185195 INFO os_vif [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:7d:58,bridge_name='br-int',has_traffic_filtering=True,id=2bcdea5a-f4b9-4e61-9a89-5af70265faba,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2bcdea5a-f4')
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.968 185195 INFO nova.virt.libvirt.driver [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Deleting instance files /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d_del
Jan 27 15:29:27 compute-0 nova_compute[185191]: 2026-01-27 15:29:27.968 185195 INFO nova.virt.libvirt.driver [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Deletion of /var/lib/nova/instances/d855a654-d263-4516-8382-efa129798a0d_del complete
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.023 185195 INFO nova.compute.manager [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Took 0.38 seconds to destroy the instance on the hypervisor.
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.024 185195 DEBUG oslo.service.loopingcall [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.024 185195 DEBUG nova.compute.manager [-] [instance: d855a654-d263-4516-8382-efa129798a0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.024 185195 DEBUG nova.network.neutron [-] [instance: d855a654-d263-4516-8382-efa129798a0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:29:28 compute-0 rsyslogd[235702]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 15:29:27.957 185195 DEBUG nova.virt.libvirt.vif [None req-4f5eae7c-1c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.874 185195 DEBUG nova.network.neutron [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Updated VIF entry in instance network info cache for port 2bcdea5a-f4b9-4e61-9a89-5af70265faba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.874 185195 DEBUG nova.network.neutron [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [{"id": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "address": "fa:16:3e:36:7d:58", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bcdea5a-f4", "ovs_interfaceid": "2bcdea5a-f4b9-4e61-9a89-5af70265faba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.896 185195 DEBUG oslo_concurrency.lockutils [req-3c5c19e9-45f0-4cfe-ad21-05ebb2e3ea11 req-f2271a85-ebb5-4aec-8004-c0777c1f2b5d 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-d855a654-d263-4516-8382-efa129798a0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.983 185195 DEBUG nova.compute.manager [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-vif-unplugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.983 185195 DEBUG oslo_concurrency.lockutils [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.984 185195 DEBUG oslo_concurrency.lockutils [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.984 185195 DEBUG oslo_concurrency.lockutils [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.984 185195 DEBUG nova.compute.manager [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] No waiting events found dispatching network-vif-unplugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:29:28 compute-0 nova_compute[185191]: 2026-01-27 15:29:28.984 185195 DEBUG nova.compute.manager [req-a2162d80-0bdc-46ff-abb8-520a9b03d3b1 req-077221c2-fd14-4bd1-85be-edf4a9ed8711 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-vif-unplugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:29:29 compute-0 podman[201073]: time="2026-01-27T15:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:29:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:29:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 27 15:29:29 compute-0 nova_compute[185191]: 2026-01-27 15:29:29.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:29 compute-0 nova_compute[185191]: 2026-01-27 15:29:29.967 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:29 compute-0 nova_compute[185191]: 2026-01-27 15:29:29.967 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:29 compute-0 nova_compute[185191]: 2026-01-27 15:29:29.968 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:29 compute-0 nova_compute[185191]: 2026-01-27 15:29:29.968 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.109 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.172 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.173 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.235 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.236 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.300 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.301 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.372 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.486 185195 DEBUG nova.network.neutron [-] [instance: d855a654-d263-4516-8382-efa129798a0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.504 185195 INFO nova.compute.manager [-] [instance: d855a654-d263-4516-8382-efa129798a0d] Took 2.48 seconds to deallocate network for instance.
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.546 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.547 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.622 185195 DEBUG nova.compute.provider_tree [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.638 185195 DEBUG nova.scheduler.client.report [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.656 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.680 185195 INFO nova.scheduler.client.report [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Deleted allocations for instance d855a654-d263-4516-8382-efa129798a0d
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.751 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.759 185195 DEBUG oslo_concurrency.lockutils [None req-4f5eae7c-1c3c-4c93-b898-18a96efc8e57 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.776 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.777 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5090MB free_disk=72.39498519897461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.777 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.778 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.904 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.904 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.905 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.952 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:29:30 compute-0 nova_compute[185191]: 2026-01-27 15:29:30.973 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.009 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.010 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.069 185195 DEBUG nova.compute.manager [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.069 185195 DEBUG oslo_concurrency.lockutils [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "d855a654-d263-4516-8382-efa129798a0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.069 185195 DEBUG oslo_concurrency.lockutils [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.070 185195 DEBUG oslo_concurrency.lockutils [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "d855a654-d263-4516-8382-efa129798a0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.070 185195 DEBUG nova.compute.manager [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] No waiting events found dispatching network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:29:31 compute-0 nova_compute[185191]: 2026-01-27 15:29:31.070 185195 WARNING nova.compute.manager [req-d5e7f58d-7791-4bbb-9992-b361d605c7bd req-199d8428-0926-4c67-9b15-06b22fe5a804 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: d855a654-d263-4516-8382-efa129798a0d] Received unexpected event network-vif-plugged-2bcdea5a-f4b9-4e61-9a89-5af70265faba for instance with vm_state deleted and task_state None.
Jan 27 15:29:31 compute-0 openstack_network_exporter[204239]: ERROR   15:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:29:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:29:31 compute-0 openstack_network_exporter[204239]: ERROR   15:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:29:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:29:32 compute-0 nova_compute[185191]: 2026-01-27 15:29:32.963 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:33 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:33.465 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:35 compute-0 podman[247147]: 2026-01-27 15:29:35.312701606 +0000 UTC m=+0.064954259 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:29:35 compute-0 nova_compute[185191]: 2026-01-27 15:29:35.753 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:37 compute-0 nova_compute[185191]: 2026-01-27 15:29:37.965 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:38 compute-0 nova_compute[185191]: 2026-01-27 15:29:38.010 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:38 compute-0 nova_compute[185191]: 2026-01-27 15:29:38.010 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:38 compute-0 nova_compute[185191]: 2026-01-27 15:29:38.011 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:38 compute-0 nova_compute[185191]: 2026-01-27 15:29:38.011 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:38 compute-0 podman[247168]: 2026-01-27 15:29:38.318543349 +0000 UTC m=+0.062759711 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:29:38 compute-0 podman[247167]: 2026-01-27 15:29:38.324606851 +0000 UTC m=+0.073136458 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', 
'/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.755 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.982 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.983 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:40 compute-0 nova_compute[185191]: 2026-01-27 15:29:40.983 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:29:42 compute-0 nova_compute[185191]: 2026-01-27 15:29:42.937 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769527767.9355865, d855a654-d263-4516-8382-efa129798a0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:29:42 compute-0 nova_compute[185191]: 2026-01-27 15:29:42.938 185195 INFO nova.compute.manager [-] [instance: d855a654-d263-4516-8382-efa129798a0d] VM Stopped (Lifecycle Event)
Jan 27 15:29:42 compute-0 nova_compute[185191]: 2026-01-27 15:29:42.963 185195 DEBUG nova.compute.manager [None req-f34fe70b-757d-4202-8eff-9a8d5abcdb47 - - - - - -] [instance: d855a654-d263-4516-8382-efa129798a0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:29:42 compute-0 nova_compute[185191]: 2026-01-27 15:29:42.967 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:43 compute-0 podman[247210]: 2026-01-27 15:29:43.298004958 +0000 UTC m=+0.057311575 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:29:45 compute-0 nova_compute[185191]: 2026-01-27 15:29:45.758 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:45 compute-0 nova_compute[185191]: 2026-01-27 15:29:45.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.079 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.080 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.080 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.080 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.081 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.082 185195 INFO nova.compute.manager [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Terminating instance
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.083 185195 DEBUG nova.compute.manager [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:29:46 compute-0 kernel: tap4c1725b6-63 (unregistering): left promiscuous mode
Jan 27 15:29:46 compute-0 NetworkManager[56090]: <info>  [1769527786.1261] device (tap4c1725b6-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.138 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 ovn_controller[97541]: 2026-01-27T15:29:46Z|00061|binding|INFO|Releasing lport 4c1725b6-637d-4572-927d-1137b3ba538c from this chassis (sb_readonly=0)
Jan 27 15:29:46 compute-0 ovn_controller[97541]: 2026-01-27T15:29:46Z|00062|binding|INFO|Setting lport 4c1725b6-637d-4572-927d-1137b3ba538c down in Southbound
Jan 27 15:29:46 compute-0 ovn_controller[97541]: 2026-01-27T15:29:46Z|00063|binding|INFO|Removing iface tap4c1725b6-63 ovn-installed in OVS
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.144 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.152 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6e:c4 192.168.0.180'], port_security=['fa:16:3e:89:6e:c4 192.168.0.180'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.180/24', 'neutron:device_id': '8c4af6eb-340b-477f-83d2-11aa7ab0b9d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7e37fe5-6354-4f61-95d0-78632be96811', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd88ca4062da4fb9bedb3a0002a43c12', 'neutron:revision_number': '4', 'neutron:security_group_ids': '812ec3a5-800e-4a9a-a5c1-7429aedf7716', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=764c6ac9-6147-480d-b23c-048fbe883747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=4c1725b6-637d-4572-927d-1137b3ba538c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.154 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 4c1725b6-637d-4572-927d-1137b3ba538c in datapath d7e37fe5-6354-4f61-95d0-78632be96811 unbound from our chassis
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.156 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7e37fe5-6354-4f61-95d0-78632be96811, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.159 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8b6b7f-c19d-41e8-be8b-59b4a850dc98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.161 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811 namespace which is not needed anymore
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.161 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 27 15:29:46 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 49.696s CPU time.
Jan 27 15:29:46 compute-0 systemd-machined[156506]: Machine qemu-1-instance-00000001 terminated.
Jan 27 15:29:46 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [NOTICE]   (238737) : haproxy version is 2.8.14-c23fe91
Jan 27 15:29:46 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [NOTICE]   (238737) : path to executable is /usr/sbin/haproxy
Jan 27 15:29:46 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [WARNING]  (238737) : Exiting Master process...
Jan 27 15:29:46 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [ALERT]    (238737) : Current worker (238739) exited with code 143 (Terminated)
Jan 27 15:29:46 compute-0 neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811[238733]: [WARNING]  (238737) : All workers exited. Exiting... (0)
Jan 27 15:29:46 compute-0 systemd[1]: libpod-642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff.scope: Deactivated successfully.
Jan 27 15:29:46 compute-0 conmon[238733]: conmon 642f8120ad0d77bd2706 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff.scope/container/memory.events
Jan 27 15:29:46 compute-0 podman[247259]: 2026-01-27 15:29:46.354287684 +0000 UTC m=+0.069714387 container died 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.381 185195 DEBUG nova.compute.manager [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-unplugged-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.382 185195 DEBUG oslo_concurrency.lockutils [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.383 185195 DEBUG oslo_concurrency.lockutils [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.383 185195 DEBUG oslo_concurrency.lockutils [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.384 185195 DEBUG nova.compute.manager [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] No waiting events found dispatching network-vif-unplugged-4c1725b6-637d-4572-927d-1137b3ba538c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.384 185195 DEBUG nova.compute.manager [req-446024eb-ba07-40dc-ad39-7b9d36332c44 req-fd7bf2b2-42d9-406b-b01b-dc20a50855d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-unplugged-4c1725b6-637d-4572-927d-1137b3ba538c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.394 185195 INFO nova.virt.libvirt.driver [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Instance destroyed successfully.
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.395 185195 DEBUG nova.objects.instance [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lazy-loading 'resources' on Instance uuid 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff-userdata-shm.mount: Deactivated successfully.
Jan 27 15:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-da103f3c94a21cbe889112284815c180c05f87935ef6db3522aedd242b18909f-merged.mount: Deactivated successfully.
Jan 27 15:29:46 compute-0 podman[247259]: 2026-01-27 15:29:46.413130598 +0000 UTC m=+0.128557301 container cleanup 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.415 185195 DEBUG nova.virt.libvirt.vif [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:10:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:10:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd88ca4062da4fb9bedb3a0002a43c12',ramdisk_id='',reservation_id='r-e3iaxvta',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='2b336e4b-c98e-4b97-9f8f-b3290e6b6caf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:10:27Z,user_data=None,user_id='24260fb24da44b10b598f9c822c026b8',uuid=8c4af6eb-340b-477f-83d2-11aa7ab0b9d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.417 185195 DEBUG nova.network.os_vif_util [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converting VIF {"id": "4c1725b6-637d-4572-927d-1137b3ba538c", "address": "fa:16:3e:89:6e:c4", "network": {"id": "d7e37fe5-6354-4f61-95d0-78632be96811", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.180", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd88ca4062da4fb9bedb3a0002a43c12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c1725b6-63", "ovs_interfaceid": "4c1725b6-637d-4572-927d-1137b3ba538c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.418 185195 DEBUG nova.network.os_vif_util [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.419 185195 DEBUG os_vif [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.422 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.423 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c1725b6-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:46 compute-0 systemd[1]: libpod-conmon-642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff.scope: Deactivated successfully.
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.430 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.434 185195 INFO os_vif [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:6e:c4,bridge_name='br-int',has_traffic_filtering=True,id=4c1725b6-637d-4572-927d-1137b3ba538c,network=Network(d7e37fe5-6354-4f61-95d0-78632be96811),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c1725b6-63')
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.435 185195 INFO nova.virt.libvirt.driver [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Deleting instance files /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3_del
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.436 185195 INFO nova.virt.libvirt.driver [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Deletion of /var/lib/nova/instances/8c4af6eb-340b-477f-83d2-11aa7ab0b9d3_del complete
Jan 27 15:29:46 compute-0 podman[247306]: 2026-01-27 15:29:46.505941572 +0000 UTC m=+0.061368863 container remove 642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.510 185195 INFO nova.compute.manager [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Took 0.43 seconds to destroy the instance on the hypervisor.
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.511 185195 DEBUG oslo.service.loopingcall [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.512 185195 DEBUG nova.compute.manager [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.512 185195 DEBUG nova.network.neutron [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.515 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6adfddbf-d295-49f3-9db8-1fd76d3a7282]: (4, ('Tue Jan 27 03:29:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811 (642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff)\n642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff\nTue Jan 27 03:29:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811 (642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff)\n642f8120ad0d77bd2706fb111c6656a127e11f7928cf7a920c0bb926d0d3e3ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.518 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[44c450f3-02cd-43ad-9ade-4d8ca6d01312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.519 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e37fe5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:29:46 compute-0 kernel: tapd7e37fe5-60: left promiscuous mode
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.524 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.534 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.536 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[98fec879-a895-4400-a98c-830f863aa0e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.553 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e040f1db-2998-4f78-90e5-8c8940cf44c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.557 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1b523cd8-bc47-41ce-9eff-f97e67401019]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.575 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4086db7c-c36c-4de4-a674-2759cb5bf099]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420452, 'reachable_time': 17761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247320, 'error': None, 'target': 'ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7e37fe5\x2d6354\x2d4f61\x2d95d0\x2d78632be96811.mount: Deactivated successfully.
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.601 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7e37fe5-6354-4f61-95d0-78632be96811 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:29:46 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:29:46.603 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[16b9138c-0560-4192-9b56-43de1d62976b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:29:46 compute-0 nova_compute[185191]: 2026-01-27 15:29:46.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.360 185195 DEBUG nova.network.neutron [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.391 185195 INFO nova.compute.manager [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Took 0.88 seconds to deallocate network for instance.
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.433 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.434 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.518 185195 DEBUG nova.compute.provider_tree [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.532 185195 DEBUG nova.scheduler.client.report [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.554 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.580 185195 INFO nova.scheduler.client.report [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Deleted allocations for instance 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3
Jan 27 15:29:47 compute-0 nova_compute[185191]: 2026-01-27 15:29:47.645 185195 DEBUG oslo_concurrency.lockutils [None req-0033283a-6828-4cf4-b5b0-65f49c1ec080 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.463 185195 DEBUG nova.compute.manager [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.465 185195 DEBUG oslo_concurrency.lockutils [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.466 185195 DEBUG oslo_concurrency.lockutils [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.466 185195 DEBUG oslo_concurrency.lockutils [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "8c4af6eb-340b-477f-83d2-11aa7ab0b9d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.467 185195 DEBUG nova.compute.manager [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] No waiting events found dispatching network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.467 185195 WARNING nova.compute.manager [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received unexpected event network-vif-plugged-4c1725b6-637d-4572-927d-1137b3ba538c for instance with vm_state deleted and task_state None.
Jan 27 15:29:48 compute-0 nova_compute[185191]: 2026-01-27 15:29:48.468 185195 DEBUG nova.compute.manager [req-e996bcc4-e0f0-4199-95a2-7de6ed164d25 req-3f7638a5-b3d4-4a1c-9a48-6e77eb52e304 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Received event network-vif-deleted-4c1725b6-637d-4572-927d-1137b3ba538c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:29:50 compute-0 nova_compute[185191]: 2026-01-27 15:29:50.764 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:51 compute-0 nova_compute[185191]: 2026-01-27 15:29:51.427 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:55 compute-0 podman[247323]: 2026-01-27 15:29:55.35310612 +0000 UTC m=+0.102679428 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:29:55 compute-0 nova_compute[185191]: 2026-01-27 15:29:55.767 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:56 compute-0 nova_compute[185191]: 2026-01-27 15:29:56.430 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:29:57 compute-0 podman[247342]: 2026-01-27 15:29:57.32602069 +0000 UTC m=+0.078651636 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:29:57 compute-0 podman[247344]: 2026-01-27 15:29:57.338259417 +0000 UTC m=+0.080079334 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:29:57 compute-0 podman[247343]: 2026-01-27 15:29:57.388839121 +0000 UTC m=+0.136164315 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 27 15:29:59 compute-0 podman[201073]: time="2026-01-27T15:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:29:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:29:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3922 "" "Go-http-client/1.1"
Jan 27 15:30:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:30:00.247 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:30:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:30:00.248 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:30:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:30:00.248 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:30:00 compute-0 nova_compute[185191]: 2026-01-27 15:30:00.769 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:01 compute-0 nova_compute[185191]: 2026-01-27 15:30:01.390 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769527786.3876898, 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:30:01 compute-0 nova_compute[185191]: 2026-01-27 15:30:01.390 185195 INFO nova.compute.manager [-] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] VM Stopped (Lifecycle Event)
Jan 27 15:30:01 compute-0 openstack_network_exporter[204239]: ERROR   15:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:30:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:30:01 compute-0 openstack_network_exporter[204239]: ERROR   15:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:30:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:30:01 compute-0 nova_compute[185191]: 2026-01-27 15:30:01.425 185195 DEBUG nova.compute.manager [None req-a532d703-40fe-414b-a496-2cf7cecbfe4a - - - - - -] [instance: 8c4af6eb-340b-477f-83d2-11aa7ab0b9d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:30:01 compute-0 nova_compute[185191]: 2026-01-27 15:30:01.433 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:05 compute-0 nova_compute[185191]: 2026-01-27 15:30:05.771 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:06 compute-0 podman[247402]: 2026-01-27 15:30:06.312039072 +0000 UTC m=+0.069701206 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:30:06 compute-0 nova_compute[185191]: 2026-01-27 15:30:06.435 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:09 compute-0 podman[247423]: 2026-01-27 15:30:09.305053982 +0000 UTC m=+0.059135934 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:30:09 compute-0 podman[247422]: 2026-01-27 15:30:09.310911109 +0000 UTC m=+0.069803789 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=kepler, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Jan 27 15:30:10 compute-0 nova_compute[185191]: 2026-01-27 15:30:10.774 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:11 compute-0 nova_compute[185191]: 2026-01-27 15:30:11.438 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:14 compute-0 podman[247464]: 2026-01-27 15:30:14.306136991 +0000 UTC m=+0.062702349 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:30:15 compute-0 nova_compute[185191]: 2026-01-27 15:30:15.777 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:15 compute-0 sshd-session[247488]: Invalid user sol from 45.148.10.240 port 37112
Jan 27 15:30:16 compute-0 sshd-session[247488]: Connection closed by invalid user sol 45.148.10.240 port 37112 [preauth]
Jan 27 15:30:16 compute-0 nova_compute[185191]: 2026-01-27 15:30:16.442 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:17 compute-0 ovn_controller[97541]: 2026-01-27T15:30:17Z|00064|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Jan 27 15:30:20 compute-0 nova_compute[185191]: 2026-01-27 15:30:20.779 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:21 compute-0 nova_compute[185191]: 2026-01-27 15:30:21.445 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:22 compute-0 sshd-session[247491]: Invalid user sol from 2.57.122.238 port 49052
Jan 27 15:30:22 compute-0 sshd-session[247491]: Connection closed by invalid user sol 2.57.122.238 port 49052 [preauth]
Jan 27 15:30:25 compute-0 nova_compute[185191]: 2026-01-27 15:30:25.781 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:25 compute-0 nova_compute[185191]: 2026-01-27 15:30:25.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:26 compute-0 podman[247493]: 2026-01-27 15:30:26.306915845 +0000 UTC m=+0.064679472 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 15:30:26 compute-0 nova_compute[185191]: 2026-01-27 15:30:26.447 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:28 compute-0 podman[247511]: 2026-01-27 15:30:28.320128942 +0000 UTC m=+0.078291606 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true)
Jan 27 15:30:28 compute-0 podman[247513]: 2026-01-27 15:30:28.347031282 +0000 UTC m=+0.098007163 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Jan 27 15:30:28 compute-0 podman[247512]: 2026-01-27 15:30:28.34769755 +0000 UTC m=+0.102577876 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:30:29 compute-0 podman[201073]: time="2026-01-27T15:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:30:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:30:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.068 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.107 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.107 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.108 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.108 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.440 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.441 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.41659545898438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.441 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.441 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.631 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.632 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.686 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.750 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.750 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.763 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.783 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.787 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.807 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.821 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.843 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.843 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.843 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:30 compute-0 nova_compute[185191]: 2026-01-27 15:30:30.844 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:30:31 compute-0 openstack_network_exporter[204239]: ERROR   15:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:30:31 compute-0 openstack_network_exporter[204239]: ERROR   15:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:30:31 compute-0 nova_compute[185191]: 2026-01-27 15:30:31.449 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:32 compute-0 nova_compute[185191]: 2026-01-27 15:30:32.962 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:32 compute-0 nova_compute[185191]: 2026-01-27 15:30:32.963 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:30:32 compute-0 nova_compute[185191]: 2026-01-27 15:30:32.985 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:30:35 compute-0 nova_compute[185191]: 2026-01-27 15:30:35.785 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:36 compute-0 nova_compute[185191]: 2026-01-27 15:30:36.451 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:37 compute-0 podman[247576]: 2026-01-27 15:30:37.365681202 +0000 UTC m=+0.112497912 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 15:30:37 compute-0 nova_compute[185191]: 2026-01-27 15:30:37.962 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:37 compute-0 nova_compute[185191]: 2026-01-27 15:30:37.963 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:38 compute-0 nova_compute[185191]: 2026-01-27 15:30:38.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:38 compute-0 nova_compute[185191]: 2026-01-27 15:30:38.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:40 compute-0 podman[247596]: 2026-01-27 15:30:40.311929549 +0000 UTC m=+0.066486631 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:30:40 compute-0 podman[247595]: 2026-01-27 15:30:40.332608842 +0000 UTC m=+0.088907490 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, managed_by=edpm_ansible, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, architecture=x86_64, vcs-type=git)
Jan 27 15:30:40 compute-0 nova_compute[185191]: 2026-01-27 15:30:40.787 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:40 compute-0 nova_compute[185191]: 2026-01-27 15:30:40.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:41 compute-0 nova_compute[185191]: 2026-01-27 15:30:41.455 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:41 compute-0 nova_compute[185191]: 2026-01-27 15:30:41.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:41 compute-0 nova_compute[185191]: 2026-01-27 15:30:41.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:30:42 compute-0 nova_compute[185191]: 2026-01-27 15:30:42.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:42 compute-0 nova_compute[185191]: 2026-01-27 15:30:42.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:30:42 compute-0 nova_compute[185191]: 2026-01-27 15:30:42.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:30:42 compute-0 nova_compute[185191]: 2026-01-27 15:30:42.969 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:30:44 compute-0 podman[247641]: 2026-01-27 15:30:44.741416651 +0000 UTC m=+0.070155219 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:30:45 compute-0 nova_compute[185191]: 2026-01-27 15:30:45.770 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:45 compute-0 nova_compute[185191]: 2026-01-27 15:30:45.790 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:46 compute-0 nova_compute[185191]: 2026-01-27 15:30:46.458 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:46 compute-0 nova_compute[185191]: 2026-01-27 15:30:46.947 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:48 compute-0 nova_compute[185191]: 2026-01-27 15:30:48.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:30:50 compute-0 nova_compute[185191]: 2026-01-27 15:30:50.791 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:51 compute-0 nova_compute[185191]: 2026-01-27 15:30:51.461 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:55 compute-0 nova_compute[185191]: 2026-01-27 15:30:55.793 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:56 compute-0 nova_compute[185191]: 2026-01-27 15:30:56.463 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:30:57 compute-0 podman[247667]: 2026-01-27 15:30:57.310759821 +0000 UTC m=+0.067053845 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:30:59 compute-0 podman[247689]: 2026-01-27 15:30:59.308007952 +0000 UTC m=+0.063264064 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Jan 27 15:30:59 compute-0 podman[247688]: 2026-01-27 15:30:59.345503056 +0000 UTC m=+0.101944969 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:30:59 compute-0 podman[247687]: 2026-01-27 15:30:59.347466328 +0000 UTC m=+0.105873224 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute)
Jan 27 15:30:59 compute-0 podman[201073]: time="2026-01-27T15:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:30:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:30:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Jan 27 15:31:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:31:00.249 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:31:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:31:00.251 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:31:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:31:00.251 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:31:00 compute-0 nova_compute[185191]: 2026-01-27 15:31:00.795 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:01 compute-0 openstack_network_exporter[204239]: ERROR   15:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:31:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:31:01 compute-0 openstack_network_exporter[204239]: ERROR   15:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:31:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:31:01 compute-0 nova_compute[185191]: 2026-01-27 15:31:01.465 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:05 compute-0 nova_compute[185191]: 2026-01-27 15:31:05.797 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:06 compute-0 nova_compute[185191]: 2026-01-27 15:31:06.467 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:08 compute-0 podman[247745]: 2026-01-27 15:31:08.331141259 +0000 UTC m=+0.085820767 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 15:31:10 compute-0 nova_compute[185191]: 2026-01-27 15:31:10.799 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.990 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.990 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02da4c7b60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:10.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:31:11.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:31:11 compute-0 podman[247766]: 2026-01-27 15:31:11.300745442 +0000 UTC m=+0.055126256 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:31:11 compute-0 podman[247765]: 2026-01-27 15:31:11.329260765 +0000 UTC m=+0.087286187 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=)
Jan 27 15:31:11 compute-0 nova_compute[185191]: 2026-01-27 15:31:11.470 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:15 compute-0 podman[247809]: 2026-01-27 15:31:15.329998104 +0000 UTC m=+0.086230799 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:31:15 compute-0 nova_compute[185191]: 2026-01-27 15:31:15.741 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:15 compute-0 nova_compute[185191]: 2026-01-27 15:31:15.801 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:16 compute-0 nova_compute[185191]: 2026-01-27 15:31:16.476 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:20 compute-0 nova_compute[185191]: 2026-01-27 15:31:20.803 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:21 compute-0 nova_compute[185191]: 2026-01-27 15:31:21.480 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:25 compute-0 nova_compute[185191]: 2026-01-27 15:31:25.805 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:26 compute-0 nova_compute[185191]: 2026-01-27 15:31:26.482 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:28 compute-0 podman[247834]: 2026-01-27 15:31:28.295075385 +0000 UTC m=+0.056445622 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:31:29 compute-0 podman[201073]: time="2026-01-27T15:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:31:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:31:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Jan 27 15:31:29 compute-0 nova_compute[185191]: 2026-01-27 15:31:29.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.312 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.313 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.313 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.313 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:31:30 compute-0 podman[247853]: 2026-01-27 15:31:30.3227838 +0000 UTC m=+0.077545346 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:31:30 compute-0 podman[247854]: 2026-01-27 15:31:30.365249887 +0000 UTC m=+0.116174841 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 15:31:30 compute-0 podman[247855]: 2026-01-27 15:31:30.389544947 +0000 UTC m=+0.133764121 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.727 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.728 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5346MB free_disk=72.41650390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.728 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.728 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.808 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.821 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.821 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.853 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.869 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.870 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:31:30 compute-0 nova_compute[185191]: 2026-01-27 15:31:30.871 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:31:31 compute-0 openstack_network_exporter[204239]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:31:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:31:31 compute-0 openstack_network_exporter[204239]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:31:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:31:31 compute-0 nova_compute[185191]: 2026-01-27 15:31:31.484 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:35 compute-0 nova_compute[185191]: 2026-01-27 15:31:35.811 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:36 compute-0 nova_compute[185191]: 2026-01-27 15:31:36.487 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:38 compute-0 nova_compute[185191]: 2026-01-27 15:31:38.867 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:38 compute-0 nova_compute[185191]: 2026-01-27 15:31:38.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:38 compute-0 nova_compute[185191]: 2026-01-27 15:31:38.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:39 compute-0 podman[247916]: 2026-01-27 15:31:39.324703028 +0000 UTC m=+0.081004569 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi)
Jan 27 15:31:39 compute-0 nova_compute[185191]: 2026-01-27 15:31:39.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:40 compute-0 nova_compute[185191]: 2026-01-27 15:31:40.813 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:41 compute-0 nova_compute[185191]: 2026-01-27 15:31:41.489 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:41 compute-0 nova_compute[185191]: 2026-01-27 15:31:41.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:41 compute-0 nova_compute[185191]: 2026-01-27 15:31:41.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:31:42 compute-0 podman[247937]: 2026-01-27 15:31:42.333284884 +0000 UTC m=+0.075825580 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:31:42 compute-0 podman[247936]: 2026-01-27 15:31:42.378134044 +0000 UTC m=+0.112522462 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, release=1214.1726694543, 
version=9.4, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container)
Jan 27 15:31:43 compute-0 nova_compute[185191]: 2026-01-27 15:31:43.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:43 compute-0 nova_compute[185191]: 2026-01-27 15:31:43.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:31:43 compute-0 nova_compute[185191]: 2026-01-27 15:31:43.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:31:43 compute-0 nova_compute[185191]: 2026-01-27 15:31:43.968 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:31:45 compute-0 nova_compute[185191]: 2026-01-27 15:31:45.815 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:46 compute-0 podman[247978]: 2026-01-27 15:31:46.319725827 +0000 UTC m=+0.073783286 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:31:46 compute-0 nova_compute[185191]: 2026-01-27 15:31:46.493 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:46 compute-0 nova_compute[185191]: 2026-01-27 15:31:46.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:50 compute-0 nova_compute[185191]: 2026-01-27 15:31:50.819 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:50 compute-0 nova_compute[185191]: 2026-01-27 15:31:50.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:31:51 compute-0 nova_compute[185191]: 2026-01-27 15:31:51.495 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:55 compute-0 nova_compute[185191]: 2026-01-27 15:31:55.822 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:56 compute-0 nova_compute[185191]: 2026-01-27 15:31:56.498 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:31:59 compute-0 podman[248002]: 2026-01-27 15:31:59.353513189 +0000 UTC m=+0.106551453 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:31:59 compute-0 podman[201073]: time="2026-01-27T15:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:31:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:31:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3921 "" "Go-http-client/1.1"
Jan 27 15:32:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:32:00.251 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:32:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:32:00.251 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:32:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:32:00.251 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:32:00 compute-0 nova_compute[185191]: 2026-01-27 15:32:00.824 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:01 compute-0 podman[248021]: 2026-01-27 15:32:01.357305325 +0000 UTC m=+0.106059670 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:32:01 compute-0 podman[248023]: 2026-01-27 15:32:01.363988603 +0000 UTC m=+0.106252924 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6)
Jan 27 15:32:01 compute-0 podman[248022]: 2026-01-27 15:32:01.386451885 +0000 UTC m=+0.129813946 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:32:01 compute-0 openstack_network_exporter[204239]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:32:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:32:01 compute-0 openstack_network_exporter[204239]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:32:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:32:01 compute-0 nova_compute[185191]: 2026-01-27 15:32:01.500 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:05 compute-0 nova_compute[185191]: 2026-01-27 15:32:05.827 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:06 compute-0 nova_compute[185191]: 2026-01-27 15:32:06.503 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:10 compute-0 podman[248086]: 2026-01-27 15:32:10.359163584 +0000 UTC m=+0.107883898 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Jan 27 15:32:10 compute-0 nova_compute[185191]: 2026-01-27 15:32:10.830 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:11 compute-0 nova_compute[185191]: 2026-01-27 15:32:11.506 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:13 compute-0 podman[248105]: 2026-01-27 15:32:13.302219516 +0000 UTC m=+0.057298564 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 27 15:32:13 compute-0 podman[248106]: 2026-01-27 15:32:13.312187563 +0000 UTC m=+0.063385097 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:32:15 compute-0 nova_compute[185191]: 2026-01-27 15:32:15.834 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:16 compute-0 nova_compute[185191]: 2026-01-27 15:32:16.510 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:17 compute-0 podman[248149]: 2026-01-27 15:32:17.312355056 +0000 UTC m=+0.064657561 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:32:20 compute-0 nova_compute[185191]: 2026-01-27 15:32:20.837 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:21 compute-0 nova_compute[185191]: 2026-01-27 15:32:21.512 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:25 compute-0 nova_compute[185191]: 2026-01-27 15:32:25.839 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:26 compute-0 nova_compute[185191]: 2026-01-27 15:32:26.515 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:29 compute-0 podman[201073]: time="2026-01-27T15:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:32:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:32:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 27 15:32:29 compute-0 nova_compute[185191]: 2026-01-27 15:32:29.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:29 compute-0 nova_compute[185191]: 2026-01-27 15:32:29.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:32:29 compute-0 nova_compute[185191]: 2026-01-27 15:32:29.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:32:29 compute-0 nova_compute[185191]: 2026-01-27 15:32:29.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:32:29 compute-0 nova_compute[185191]: 2026-01-27 15:32:29.974 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.262 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.263 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.41659545898438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.263 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.264 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:32:30 compute-0 podman[248176]: 2026-01-27 15:32:30.326228932 +0000 UTC m=+0.085738666 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.333 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.333 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.353 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.366 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.367 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.367 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:32:30 compute-0 nova_compute[185191]: 2026-01-27 15:32:30.842 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:31 compute-0 openstack_network_exporter[204239]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:32:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:32:31 compute-0 openstack_network_exporter[204239]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:32:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:32:31 compute-0 nova_compute[185191]: 2026-01-27 15:32:31.517 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:32 compute-0 podman[248197]: 2026-01-27 15:32:32.317463232 +0000 UTC m=+0.070680922 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Jan 27 15:32:32 compute-0 podman[248195]: 2026-01-27 15:32:32.345367919 +0000 UTC m=+0.105087513 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126)
Jan 27 15:32:32 compute-0 podman[248196]: 2026-01-27 15:32:32.346712065 +0000 UTC m=+0.103101440 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 15:32:34 compute-0 sshd-session[248259]: Invalid user solana from 45.148.10.240 port 54878
Jan 27 15:32:34 compute-0 sshd-session[248259]: Connection closed by invalid user solana 45.148.10.240 port 54878 [preauth]
Jan 27 15:32:34 compute-0 sshd-session[248261]: Invalid user sol from 2.57.122.238 port 53162
Jan 27 15:32:34 compute-0 sshd-session[248261]: Connection closed by invalid user sol 2.57.122.238 port 53162 [preauth]
Jan 27 15:32:35 compute-0 nova_compute[185191]: 2026-01-27 15:32:35.844 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:36 compute-0 nova_compute[185191]: 2026-01-27 15:32:36.519 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:39 compute-0 nova_compute[185191]: 2026-01-27 15:32:39.362 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:39 compute-0 nova_compute[185191]: 2026-01-27 15:32:39.363 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:40 compute-0 nova_compute[185191]: 2026-01-27 15:32:40.846 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:40 compute-0 nova_compute[185191]: 2026-01-27 15:32:40.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:41 compute-0 podman[248263]: 2026-01-27 15:32:41.303497975 +0000 UTC m=+0.058955588 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:32:41 compute-0 nova_compute[185191]: 2026-01-27 15:32:41.522 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:41 compute-0 nova_compute[185191]: 2026-01-27 15:32:41.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:41 compute-0 nova_compute[185191]: 2026-01-27 15:32:41.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:42 compute-0 nova_compute[185191]: 2026-01-27 15:32:42.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:42 compute-0 nova_compute[185191]: 2026-01-27 15:32:42.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:32:44 compute-0 podman[248284]: 2026-01-27 15:32:44.304207225 +0000 UTC m=+0.056281466 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:32:44 compute-0 podman[248283]: 2026-01-27 15:32:44.305426788 +0000 UTC m=+0.064651891 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, architecture=x86_64, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible)
Jan 27 15:32:45 compute-0 nova_compute[185191]: 2026-01-27 15:32:45.847 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:45 compute-0 nova_compute[185191]: 2026-01-27 15:32:45.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:45 compute-0 nova_compute[185191]: 2026-01-27 15:32:45.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:32:45 compute-0 nova_compute[185191]: 2026-01-27 15:32:45.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:32:45 compute-0 nova_compute[185191]: 2026-01-27 15:32:45.960 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:32:46 compute-0 nova_compute[185191]: 2026-01-27 15:32:46.525 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:47 compute-0 nova_compute[185191]: 2026-01-27 15:32:47.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:48 compute-0 podman[248323]: 2026-01-27 15:32:48.293381881 +0000 UTC m=+0.055304881 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:32:50 compute-0 nova_compute[185191]: 2026-01-27 15:32:50.849 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:51 compute-0 nova_compute[185191]: 2026-01-27 15:32:51.527 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:52 compute-0 nova_compute[185191]: 2026-01-27 15:32:52.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:32:55 compute-0 nova_compute[185191]: 2026-01-27 15:32:55.851 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:56 compute-0 nova_compute[185191]: 2026-01-27 15:32:56.530 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:32:59 compute-0 podman[201073]: time="2026-01-27T15:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:32:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:32:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 27 15:33:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:33:00.253 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:33:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:33:00.253 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:33:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:33:00.253 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:33:00 compute-0 nova_compute[185191]: 2026-01-27 15:33:00.853 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:01 compute-0 podman[248347]: 2026-01-27 15:33:01.300010748 +0000 UTC m=+0.056726929 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 15:33:01 compute-0 openstack_network_exporter[204239]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:33:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:33:01 compute-0 openstack_network_exporter[204239]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:33:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:33:01 compute-0 nova_compute[185191]: 2026-01-27 15:33:01.533 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:03 compute-0 podman[248365]: 2026-01-27 15:33:03.316226908 +0000 UTC m=+0.069477740 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:33:03 compute-0 podman[248367]: 2026-01-27 15:33:03.345394938 +0000 UTC m=+0.090899582 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:33:03 compute-0 podman[248366]: 2026-01-27 15:33:03.400280317 +0000 UTC m=+0.144848246 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 15:33:05 compute-0 nova_compute[185191]: 2026-01-27 15:33:05.857 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:06 compute-0 nova_compute[185191]: 2026-01-27 15:33:06.537 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:10 compute-0 nova_compute[185191]: 2026-01-27 15:33:10.859 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.990 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.991 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d8ea49e0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.007 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:33:11.008 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:33:11 compute-0 nova_compute[185191]: 2026-01-27 15:33:11.539 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:12 compute-0 podman[248431]: 2026-01-27 15:33:12.319512238 +0000 UTC m=+0.071686419 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:33:14 compute-0 podman[248450]: 2026-01-27 15:33:14.739454861 +0000 UTC m=+0.064582419 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:33:14 compute-0 podman[248449]: 2026-01-27 15:33:14.741707621 +0000 UTC m=+0.072075409 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 15:33:15 compute-0 nova_compute[185191]: 2026-01-27 15:33:15.861 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:16 compute-0 nova_compute[185191]: 2026-01-27 15:33:16.542 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:19 compute-0 podman[248488]: 2026-01-27 15:33:19.330437876 +0000 UTC m=+0.091081227 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:33:20 compute-0 nova_compute[185191]: 2026-01-27 15:33:20.866 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:21 compute-0 nova_compute[185191]: 2026-01-27 15:33:21.546 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:25 compute-0 nova_compute[185191]: 2026-01-27 15:33:25.868 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:26 compute-0 nova_compute[185191]: 2026-01-27 15:33:26.548 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:29 compute-0 podman[201073]: time="2026-01-27T15:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:33:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:33:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 27 15:33:29 compute-0 nova_compute[185191]: 2026-01-27 15:33:29.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.042 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.042 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.042 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.043 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.364 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.365 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.41654586791992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.365 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.365 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.541 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.541 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.573 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.600 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.602 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.602 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:33:30 compute-0 nova_compute[185191]: 2026-01-27 15:33:30.870 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:31 compute-0 openstack_network_exporter[204239]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:33:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:33:31 compute-0 openstack_network_exporter[204239]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:33:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:33:31 compute-0 nova_compute[185191]: 2026-01-27 15:33:31.550 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:31 compute-0 podman[248513]: 2026-01-27 15:33:31.634204316 +0000 UTC m=+0.058481175 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:33:34 compute-0 podman[248531]: 2026-01-27 15:33:34.31398024 +0000 UTC m=+0.071814203 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Jan 27 15:33:34 compute-0 podman[248533]: 2026-01-27 15:33:34.345930944 +0000 UTC m=+0.082566440 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 15:33:34 compute-0 podman[248532]: 2026-01-27 15:33:34.373628605 +0000 UTC m=+0.125885569 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:33:35 compute-0 nova_compute[185191]: 2026-01-27 15:33:35.872 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:36 compute-0 nova_compute[185191]: 2026-01-27 15:33:36.553 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:40 compute-0 nova_compute[185191]: 2026-01-27 15:33:40.602 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:40 compute-0 nova_compute[185191]: 2026-01-27 15:33:40.874 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:40 compute-0 nova_compute[185191]: 2026-01-27 15:33:40.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:41 compute-0 nova_compute[185191]: 2026-01-27 15:33:41.555 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:42 compute-0 nova_compute[185191]: 2026-01-27 15:33:42.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:42 compute-0 nova_compute[185191]: 2026-01-27 15:33:42.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:43 compute-0 podman[248595]: 2026-01-27 15:33:43.304078547 +0000 UTC m=+0.062276627 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
Jan 27 15:33:43 compute-0 nova_compute[185191]: 2026-01-27 15:33:43.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:43 compute-0 nova_compute[185191]: 2026-01-27 15:33:43.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:33:45 compute-0 podman[248617]: 2026-01-27 15:33:45.321381307 +0000 UTC m=+0.067933389 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:33:45 compute-0 podman[248616]: 2026-01-27 15:33:45.323938645 +0000 UTC m=+0.074756511 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 27 15:33:45 compute-0 nova_compute[185191]: 2026-01-27 15:33:45.877 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:45 compute-0 nova_compute[185191]: 2026-01-27 15:33:45.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:45 compute-0 nova_compute[185191]: 2026-01-27 15:33:45.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:33:45 compute-0 nova_compute[185191]: 2026-01-27 15:33:45.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:33:45 compute-0 nova_compute[185191]: 2026-01-27 15:33:45.958 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:33:46 compute-0 nova_compute[185191]: 2026-01-27 15:33:46.557 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:48 compute-0 nova_compute[185191]: 2026-01-27 15:33:48.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:50 compute-0 podman[248659]: 2026-01-27 15:33:50.327004796 +0000 UTC m=+0.085883199 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:33:50 compute-0 nova_compute[185191]: 2026-01-27 15:33:50.880 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:51 compute-0 nova_compute[185191]: 2026-01-27 15:33:51.559 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:54 compute-0 nova_compute[185191]: 2026-01-27 15:33:54.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:33:55 compute-0 nova_compute[185191]: 2026-01-27 15:33:55.883 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:56 compute-0 nova_compute[185191]: 2026-01-27 15:33:56.562 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:33:59 compute-0 podman[201073]: time="2026-01-27T15:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:33:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:33:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3922 "" "Go-http-client/1.1"
Jan 27 15:34:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:00.253 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:34:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:00.254 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:34:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:00.254 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:34:00 compute-0 nova_compute[185191]: 2026-01-27 15:34:00.883 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:01 compute-0 openstack_network_exporter[204239]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:34:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:34:01 compute-0 openstack_network_exporter[204239]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:34:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:34:01 compute-0 nova_compute[185191]: 2026-01-27 15:34:01.564 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:02 compute-0 podman[248683]: 2026-01-27 15:34:02.326484045 +0000 UTC m=+0.085872468 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 15:34:05 compute-0 podman[248700]: 2026-01-27 15:34:05.320951569 +0000 UTC m=+0.073571169 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 15:34:05 compute-0 podman[248702]: 2026-01-27 15:34:05.337702427 +0000 UTC m=+0.076844557 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, name=ubi9-minimal, version=9.6)
Jan 27 15:34:05 compute-0 podman[248701]: 2026-01-27 15:34:05.344834828 +0000 UTC m=+0.091272793 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:34:05 compute-0 nova_compute[185191]: 2026-01-27 15:34:05.885 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:06 compute-0 nova_compute[185191]: 2026-01-27 15:34:06.567 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:10 compute-0 nova_compute[185191]: 2026-01-27 15:34:10.888 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:11 compute-0 nova_compute[185191]: 2026-01-27 15:34:11.570 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:14 compute-0 podman[248761]: 2026-01-27 15:34:14.340503545 +0000 UTC m=+0.097804147 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:34:15 compute-0 nova_compute[185191]: 2026-01-27 15:34:15.892 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:16 compute-0 podman[248779]: 2026-01-27 15:34:16.318335791 +0000 UTC m=+0.074835823 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=kepler, vcs-type=git, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Jan 27 15:34:16 compute-0 podman[248780]: 2026-01-27 15:34:16.345817696 +0000 UTC m=+0.093463051 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:34:16 compute-0 nova_compute[185191]: 2026-01-27 15:34:16.373 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:16 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:16.372 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:34:16 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:16.374 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:34:16 compute-0 nova_compute[185191]: 2026-01-27 15:34:16.573 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:17 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:34:17.377 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:34:20 compute-0 nova_compute[185191]: 2026-01-27 15:34:20.892 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:21 compute-0 podman[248823]: 2026-01-27 15:34:21.299599888 +0000 UTC m=+0.058486236 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:34:21 compute-0 nova_compute[185191]: 2026-01-27 15:34:21.577 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:25 compute-0 nova_compute[185191]: 2026-01-27 15:34:25.896 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:26 compute-0 nova_compute[185191]: 2026-01-27 15:34:26.581 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:29 compute-0 podman[201073]: time="2026-01-27T15:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:34:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:34:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Jan 27 15:34:30 compute-0 nova_compute[185191]: 2026-01-27 15:34:30.899 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:31 compute-0 openstack_network_exporter[204239]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:34:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:34:31 compute-0 openstack_network_exporter[204239]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:34:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.583 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.973 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.973 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:34:31 compute-0 nova_compute[185191]: 2026-01-27 15:34:31.974 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.283 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.284 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.41659545898438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.284 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.284 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.372 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.373 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.412 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.427 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.428 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:34:32 compute-0 nova_compute[185191]: 2026-01-27 15:34:32.429 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:34:33 compute-0 podman[248848]: 2026-01-27 15:34:33.297050184 +0000 UTC m=+0.056002189 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:34:35 compute-0 nova_compute[185191]: 2026-01-27 15:34:35.902 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:36 compute-0 podman[248868]: 2026-01-27 15:34:36.335003041 +0000 UTC m=+0.075792189 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350)
Jan 27 15:34:36 compute-0 podman[248866]: 2026-01-27 15:34:36.348223984 +0000 UTC m=+0.100212012 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 27 15:34:36 compute-0 podman[248867]: 2026-01-27 15:34:36.358405017 +0000 UTC m=+0.105784121 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:34:36 compute-0 nova_compute[185191]: 2026-01-27 15:34:36.586 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:40 compute-0 nova_compute[185191]: 2026-01-27 15:34:40.430 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:40 compute-0 nova_compute[185191]: 2026-01-27 15:34:40.905 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:41 compute-0 sshd-session[248929]: Invalid user sol from 2.57.122.238 port 58798
Jan 27 15:34:41 compute-0 sshd-session[248929]: Connection closed by invalid user sol 2.57.122.238 port 58798 [preauth]
Jan 27 15:34:41 compute-0 nova_compute[185191]: 2026-01-27 15:34:41.589 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:42 compute-0 nova_compute[185191]: 2026-01-27 15:34:42.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:42 compute-0 nova_compute[185191]: 2026-01-27 15:34:42.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:43 compute-0 nova_compute[185191]: 2026-01-27 15:34:43.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:43 compute-0 nova_compute[185191]: 2026-01-27 15:34:43.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:43 compute-0 nova_compute[185191]: 2026-01-27 15:34:43.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:34:44 compute-0 podman[248931]: 2026-01-27 15:34:44.727421849 +0000 UTC m=+0.060586722 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 15:34:45 compute-0 nova_compute[185191]: 2026-01-27 15:34:45.908 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:45 compute-0 nova_compute[185191]: 2026-01-27 15:34:45.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:46 compute-0 ovn_controller[97541]: 2026-01-27T15:34:46Z|00065|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 27 15:34:46 compute-0 nova_compute[185191]: 2026-01-27 15:34:46.591 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:47 compute-0 podman[248950]: 2026-01-27 15:34:47.305332396 +0000 UTC m=+0.059139153 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:34:47 compute-0 podman[248949]: 2026-01-27 15:34:47.315093127 +0000 UTC m=+0.071662828 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, config_id=kepler, io.buildah.version=1.29.0, vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler)
Jan 27 15:34:47 compute-0 nova_compute[185191]: 2026-01-27 15:34:47.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:47 compute-0 nova_compute[185191]: 2026-01-27 15:34:47.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:34:47 compute-0 nova_compute[185191]: 2026-01-27 15:34:47.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:34:47 compute-0 nova_compute[185191]: 2026-01-27 15:34:47.965 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:34:49 compute-0 nova_compute[185191]: 2026-01-27 15:34:49.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:50 compute-0 nova_compute[185191]: 2026-01-27 15:34:50.909 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:51 compute-0 nova_compute[185191]: 2026-01-27 15:34:51.593 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:52 compute-0 podman[248988]: 2026-01-27 15:34:52.322258179 +0000 UTC m=+0.083336761 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:34:54 compute-0 sshd-session[249012]: Invalid user sol from 45.148.10.240 port 36448
Jan 27 15:34:54 compute-0 sshd-session[249012]: Connection closed by invalid user sol 45.148.10.240 port 36448 [preauth]
Jan 27 15:34:55 compute-0 nova_compute[185191]: 2026-01-27 15:34:55.913 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:56 compute-0 nova_compute[185191]: 2026-01-27 15:34:56.598 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:34:56 compute-0 nova_compute[185191]: 2026-01-27 15:34:56.948 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:34:59 compute-0 podman[201073]: time="2026-01-27T15:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:34:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:34:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 27 15:35:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:00.255 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:00.255 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:00.255 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:00 compute-0 nova_compute[185191]: 2026-01-27 15:35:00.918 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:01 compute-0 openstack_network_exporter[204239]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:35:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:35:01 compute-0 openstack_network_exporter[204239]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:35:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:35:01 compute-0 nova_compute[185191]: 2026-01-27 15:35:01.602 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:04 compute-0 podman[249014]: 2026-01-27 15:35:04.316789526 +0000 UTC m=+0.077670939 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 15:35:05 compute-0 nova_compute[185191]: 2026-01-27 15:35:05.920 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:06 compute-0 nova_compute[185191]: 2026-01-27 15:35:06.271 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:06 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:06.271 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:35:06 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:06.273 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:35:06 compute-0 nova_compute[185191]: 2026-01-27 15:35:06.606 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:07 compute-0 podman[249033]: 2026-01-27 15:35:07.312923394 +0000 UTC m=+0.068351429 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:35:07 compute-0 podman[249035]: 2026-01-27 15:35:07.3254507 +0000 UTC m=+0.071859564 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Jan 27 15:35:07 compute-0 podman[249034]: 2026-01-27 15:35:07.35610864 +0000 UTC m=+0.106621354 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 15:35:10 compute-0 nova_compute[185191]: 2026-01-27 15:35:10.923 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.991 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.991 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.001 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:35:11.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:35:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:11.275 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:11 compute-0 nova_compute[185191]: 2026-01-27 15:35:11.608 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:12 compute-0 nova_compute[185191]: 2026-01-27 15:35:12.503 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:12 compute-0 nova_compute[185191]: 2026-01-27 15:35:12.526 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:15 compute-0 nova_compute[185191]: 2026-01-27 15:35:15.217 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:15 compute-0 podman[249093]: 2026-01-27 15:35:15.318062761 +0000 UTC m=+0.072230323 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:35:15 compute-0 nova_compute[185191]: 2026-01-27 15:35:15.925 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:16 compute-0 nova_compute[185191]: 2026-01-27 15:35:16.611 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:17 compute-0 nova_compute[185191]: 2026-01-27 15:35:17.280 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:17 compute-0 nova_compute[185191]: 2026-01-27 15:35:17.433 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:18 compute-0 podman[249114]: 2026-01-27 15:35:18.312243236 +0000 UTC m=+0.057143810 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:35:18 compute-0 podman[249113]: 2026-01-27 15:35:18.322444139 +0000 UTC m=+0.076595430 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, config_id=kepler, name=ubi9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Jan 27 15:35:20 compute-0 nova_compute[185191]: 2026-01-27 15:35:20.928 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:21 compute-0 nova_compute[185191]: 2026-01-27 15:35:21.614 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:23 compute-0 podman[249157]: 2026-01-27 15:35:23.325417827 +0000 UTC m=+0.084459540 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:35:24 compute-0 nova_compute[185191]: 2026-01-27 15:35:24.923 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:25 compute-0 nova_compute[185191]: 2026-01-27 15:35:25.009 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:25 compute-0 nova_compute[185191]: 2026-01-27 15:35:25.568 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:25 compute-0 nova_compute[185191]: 2026-01-27 15:35:25.930 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:26 compute-0 nova_compute[185191]: 2026-01-27 15:35:26.616 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:26 compute-0 nova_compute[185191]: 2026-01-27 15:35:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:29 compute-0 podman[201073]: time="2026-01-27T15:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:35:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:35:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 27 15:35:30 compute-0 nova_compute[185191]: 2026-01-27 15:35:30.555 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:30 compute-0 nova_compute[185191]: 2026-01-27 15:35:30.932 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:31 compute-0 openstack_network_exporter[204239]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:35:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:35:31 compute-0 openstack_network_exporter[204239]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:35:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:35:31 compute-0 nova_compute[185191]: 2026-01-27 15:35:31.619 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:31 compute-0 nova_compute[185191]: 2026-01-27 15:35:31.964 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:31 compute-0 nova_compute[185191]: 2026-01-27 15:35:31.965 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:35:32 compute-0 nova_compute[185191]: 2026-01-27 15:35:32.783 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:32 compute-0 nova_compute[185191]: 2026-01-27 15:35:32.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.003 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.003 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.003 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.004 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.287 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.288 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5380MB free_disk=72.41610336303711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.288 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.288 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.530 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.530 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.758 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.951 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.952 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:35:33 compute-0 nova_compute[185191]: 2026-01-27 15:35:33.973 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:35:34 compute-0 nova_compute[185191]: 2026-01-27 15:35:34.004 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:35:34 compute-0 nova_compute[185191]: 2026-01-27 15:35:34.032 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:35:34 compute-0 nova_compute[185191]: 2026-01-27 15:35:34.080 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:35:34 compute-0 nova_compute[185191]: 2026-01-27 15:35:34.082 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:35:34 compute-0 nova_compute[185191]: 2026-01-27 15:35:34.082 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:35 compute-0 podman[249180]: 2026-01-27 15:35:35.331799303 +0000 UTC m=+0.091576350 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 15:35:35 compute-0 nova_compute[185191]: 2026-01-27 15:35:35.934 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:36 compute-0 nova_compute[185191]: 2026-01-27 15:35:36.621 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:38 compute-0 podman[249201]: 2026-01-27 15:35:38.324817428 +0000 UTC m=+0.065285508 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 27 15:35:38 compute-0 podman[249199]: 2026-01-27 15:35:38.345743208 +0000 UTC m=+0.098393733 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 27 15:35:38 compute-0 podman[249200]: 2026-01-27 15:35:38.364240253 +0000 UTC m=+0.110432116 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.037 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.037 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.084 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.252 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.253 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.260 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.260 185195 INFO nova.compute.claims [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.451 185195 DEBUG nova.compute.provider_tree [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.494 185195 DEBUG nova.scheduler.client.report [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.523 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.524 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.590 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.591 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.609 185195 INFO nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.633 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.738 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.739 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.740 185195 INFO nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Creating image(s)
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.741 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.741 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.742 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.743 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.744 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.935 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:35:40 compute-0 nova_compute[185191]: 2026-01-27 15:35:40.959 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:35:41 compute-0 nova_compute[185191]: 2026-01-27 15:35:41.105 185195 DEBUG nova.policy [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '284e9a7227b6494189d43d1f5c7f629f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de927906c1224ae18edd6fb91a4a7037', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:35:41 compute-0 nova_compute[185191]: 2026-01-27 15:35:41.624 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:41 compute-0 nova_compute[185191]: 2026-01-27 15:35:41.960 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.283 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.283 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.330 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.411 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.411 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.425 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.425 185195 INFO nova.compute.claims [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.777 185195 DEBUG nova.compute.provider_tree [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.800 185195 DEBUG nova.scheduler.client.report [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.823 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.823 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.868 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.868 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.890 185195 INFO nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:35:42 compute-0 nova_compute[185191]: 2026-01-27 15:35:42.915 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.020 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.021 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.022 185195 INFO nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Creating image(s)
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.023 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.023 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.024 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.024 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.280 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Successfully created port: 33cb1013-4786-49f5-a482-721c6aeb907b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.403 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.465 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.466 185195 DEBUG nova.virt.images [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.503 185195 DEBUG nova.privsep.utils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.504 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.part /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.779 185195 DEBUG nova.policy [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '405a986111cf446a943b8d37c3022002', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9f2c5d84ff64d1da269b157e0956b5a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.812 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.part /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.converted" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.819 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.882 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.884 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.896 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.897 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.909 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.925 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.966 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.967 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.968 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.979 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.998 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:43 compute-0 nova_compute[185191]: 2026-01-27 15:35:43.999 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.038 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.039 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.222 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk 1073741824" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.223 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.224 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.238 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.251 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.281 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.282 185195 DEBUG nova.virt.disk.api [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Checking if we can resize image /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.282 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.310 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.311 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.343 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.344 185195 DEBUG nova.virt.disk.api [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Cannot resize image /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.344 185195 DEBUG nova.objects.instance [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lazy-loading 'migration_context' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.368 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.369 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Ensure instance console log exists: /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.369 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.370 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.370 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.432 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk 1073741824" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.433 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.434 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.503 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.504 185195 DEBUG nova.virt.disk.api [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Checking if we can resize image /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.504 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.565 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.566 185195 DEBUG nova.virt.disk.api [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Cannot resize image /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.567 185195 DEBUG nova.objects.instance [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lazy-loading 'migration_context' on Instance uuid eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.585 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.586 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Ensure instance console log exists: /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.587 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.587 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.588 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.902 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Successfully created port: be6e3ca2-5630-4d59-904c-810951329397 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:44 compute-0 nova_compute[185191]: 2026-01-27 15:35:44.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:45 compute-0 nova_compute[185191]: 2026-01-27 15:35:45.937 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:45 compute-0 nova_compute[185191]: 2026-01-27 15:35:45.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:45 compute-0 nova_compute[185191]: 2026-01-27 15:35:45.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.064 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Successfully updated port: 33cb1013-4786-49f5-a482-721c6aeb907b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.090 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.090 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.090 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:35:46 compute-0 podman[249306]: 2026-01-27 15:35:46.301685298 +0000 UTC m=+0.059291717 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.360 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.626 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.779 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Successfully updated port: be6e3ca2-5630-4d59-904c-810951329397 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.864 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.865 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquired lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:46 compute-0 nova_compute[185191]: 2026-01-27 15:35:46.865 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.140 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.141 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.165 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.242 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.247 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.248 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.259 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.259 185195 INFO nova.compute.claims [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.491 185195 DEBUG nova.compute.provider_tree [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.513 185195 DEBUG nova.scheduler.client.report [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.535 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.536 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.588 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.588 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.606 185195 INFO nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.634 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.737 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.738 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.739 185195 INFO nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Creating image(s)
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.739 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.740 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.740 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.751 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.809 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.810 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.811 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.822 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.860 185195 DEBUG nova.compute.manager [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.861 185195 DEBUG nova.compute.manager [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing instance network info cache due to event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.861 185195 DEBUG oslo_concurrency.lockutils [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.884 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.885 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.949 185195 DEBUG nova.policy [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e4d1728be0c14934b0fb170d90f2cf80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '872630f403b24cda8e3ab59acbe33b66', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.967 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk 1073741824" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.968 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.968 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.986 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.987 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.987 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 15:35:47 compute-0 nova_compute[185191]: 2026-01-27 15:35:47.987 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.020 185195 DEBUG nova.compute.manager [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-changed-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.021 185195 DEBUG nova.compute.manager [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Refreshing instance network info cache due to event network-changed-be6e3ca2-5630-4d59-904c-810951329397. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.022 185195 DEBUG oslo_concurrency.lockutils [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.024 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.025 185195 DEBUG nova.virt.disk.api [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Checking if we can resize image /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.025 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.087 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.089 185195 DEBUG nova.virt.disk.api [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Cannot resize image /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.089 185195 DEBUG nova.objects.instance [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c1eac15-4acf-423d-817f-805a374bb405 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.283 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.284 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Ensure instance console log exists: /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.285 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.285 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.286 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.385 185195 DEBUG nova.network.neutron [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.413 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.415 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Instance network_info: |[{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.416 185195 DEBUG oslo_concurrency.lockutils [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.416 185195 DEBUG nova.network.neutron [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.418 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Start _get_guest_xml network_info=[{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.426 185195 WARNING nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.431 185195 DEBUG nova.virt.libvirt.host [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.432 185195 DEBUG nova.virt.libvirt.host [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.439 185195 DEBUG nova.virt.libvirt.host [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.440 185195 DEBUG nova.virt.libvirt.host [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.441 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.441 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.442 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.442 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.442 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.443 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.443 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.443 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.443 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.444 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.444 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.444 185195 DEBUG nova.virt.hardware [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.448 185195 DEBUG nova.virt.libvirt.vif [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-296491480',display_name='tempest-AttachInterfacesUnderV243Test-server-296491480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-296491480',id=6,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMruRPVmIxyxJpLw1XteWxArIgcJ2nS0hQhNn3b2y9hdAlw+pR6sm2cPZ97Rely9ERzVsR/GKvqv4AG8086R3E12n5VkwDtAMg2Wmwzi0BPUMEmi7C5mquhLTMNiji6WQQ==',key_name='tempest-keypair-811725567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de927906c1224ae18edd6fb91a4a7037',ramdisk_id='',reservation_id='r-8avql300',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1149926905',owner_user_name='tempest-AttachInterfacesUnderV243Test-1149926905-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='284e9a7227b6494189d43d1f5c7f629f',uuid=b4f95e32-4dde-475f-bf71-8bd9391938a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.448 185195 DEBUG nova.network.os_vif_util [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converting VIF {"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.449 185195 DEBUG nova.network.os_vif_util [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.450 185195 DEBUG nova.objects.instance [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.466 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <uuid>b4f95e32-4dde-475f-bf71-8bd9391938a2</uuid>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <name>instance-00000006</name>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-296491480</nova:name>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:35:48</nova:creationTime>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:user uuid="284e9a7227b6494189d43d1f5c7f629f">tempest-AttachInterfacesUnderV243Test-1149926905-project-member</nova:user>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:project uuid="de927906c1224ae18edd6fb91a4a7037">tempest-AttachInterfacesUnderV243Test-1149926905</nova:project>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         <nova:port uuid="33cb1013-4786-49f5-a482-721c6aeb907b">
Jan 27 15:35:48 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <system>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="serial">b4f95e32-4dde-475f-bf71-8bd9391938a2</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="uuid">b4f95e32-4dde-475f-bf71-8bd9391938a2</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </system>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <os>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </os>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <features>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </features>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.config"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:c6:55:96"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <target dev="tap33cb1013-47"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/console.log" append="off"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <video>
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </video>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:35:48 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:35:48 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:35:48 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:35:48 compute-0 nova_compute[185191]: </domain>
Jan 27 15:35:48 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.467 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Preparing to wait for external event network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.467 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.468 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.468 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.469 185195 DEBUG nova.virt.libvirt.vif [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-296491480',display_name='tempest-AttachInterfacesUnderV243Test-server-296491480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-296491480',id=6,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMruRPVmIxyxJpLw1XteWxArIgcJ2nS0hQhNn3b2y9hdAlw+pR6sm2cPZ97Rely9ERzVsR/GKvqv4AG8086R3E12n5VkwDtAMg2Wmwzi0BPUMEmi7C5mquhLTMNiji6WQQ==',key_name='tempest-keypair-811725567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de927906c1224ae18edd6fb91a4a7037',ramdisk_id='',reservation_id='r-8avql300',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1149926905',owner_user_name='tempest-AttachInterfacesUnderV243Test-1149926905-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='284e9a7227b6494189d43d1f5c7f629f',uuid=b4f95e32-4dde-475f-bf71-8bd9391938a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.469 185195 DEBUG nova.network.os_vif_util [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converting VIF {"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.470 185195 DEBUG nova.network.os_vif_util [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.470 185195 DEBUG os_vif [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.471 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.472 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.472 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.475 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.475 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33cb1013-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.476 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33cb1013-47, col_values=(('external_ids', {'iface-id': '33cb1013-4786-49f5-a482-721c6aeb907b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:55:96', 'vm-uuid': 'b4f95e32-4dde-475f-bf71-8bd9391938a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:48 compute-0 NetworkManager[56090]: <info>  [1769528148.4783] manager: (tap33cb1013-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.480 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.485 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.486 185195 INFO os_vif [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47')
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.578 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.578 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.579 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] No VIF found with MAC fa:16:3e:c6:55:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:35:48 compute-0 nova_compute[185191]: 2026-01-27 15:35:48.581 185195 INFO nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Using config drive
Jan 27 15:35:49 compute-0 podman[249342]: 2026-01-27 15:35:49.321678716 +0000 UTC m=+0.074045513 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 15:35:49 compute-0 podman[249343]: 2026-01-27 15:35:49.342893983 +0000 UTC m=+0.093158993 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.351 185195 INFO nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Creating config drive at /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.config
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.358 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa_wipfd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.384 185195 DEBUG nova.network.neutron [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updating instance_info_cache with network_info: [{"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.449 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Releasing lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.449 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Instance network_info: |[{"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.450 185195 DEBUG oslo_concurrency.lockutils [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.450 185195 DEBUG nova.network.neutron [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Refreshing network info cache for port be6e3ca2-5630-4d59-904c-810951329397 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.453 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Start _get_guest_xml network_info=[{"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.460 185195 WARNING nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.470 185195 DEBUG nova.virt.libvirt.host [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.471 185195 DEBUG nova.virt.libvirt.host [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.478 185195 DEBUG nova.virt.libvirt.host [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.478 185195 DEBUG nova.virt.libvirt.host [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.479 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.479 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.480 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.480 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.480 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.481 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.481 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.482 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.482 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.482 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.483 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.483 185195 DEBUG nova.virt.hardware [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.487 185195 DEBUG nova.virt.libvirt.vif [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-74227258',display_name='tempest-ServersTestJSON-server-74227258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-74227258',id=7,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOHGu+sKZvpneTa1cl6PsMBo1htCB/ReJvXR4yYIPfN3V7p8Dge6SHSk5NbvWpfuCZ5QNNfus6hdCAeuxco1wUwmLDMTJh0we5kyHGikIwmpz1xRz0qGF9R9HJmnBRA1Q==',key_name='tempest-keypair-1427022892',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9f2c5d84ff64d1da269b157e0956b5a',ramdisk_id='',reservation_id='r-zu48u0rc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1062714811',owner_user_name='tempest-ServersTestJSON-1062714811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='405a986111cf446a943b8d37c3022002',uuid=eae5a95c-09c0-4c0b-ae8f-3ab2659972b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.487 185195 DEBUG nova.network.os_vif_util [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converting VIF {"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.488 185195 DEBUG nova.network.os_vif_util [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.489 185195 DEBUG nova.objects.instance [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lazy-loading 'pci_devices' on Instance uuid eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.491 185195 DEBUG oslo_concurrency.processutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa_wipfd" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.519 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <uuid>eae5a95c-09c0-4c0b-ae8f-3ab2659972b8</uuid>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <name>instance-00000007</name>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:name>tempest-ServersTestJSON-server-74227258</nova:name>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:35:49</nova:creationTime>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:user uuid="405a986111cf446a943b8d37c3022002">tempest-ServersTestJSON-1062714811-project-member</nova:user>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:project uuid="b9f2c5d84ff64d1da269b157e0956b5a">tempest-ServersTestJSON-1062714811</nova:project>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         <nova:port uuid="be6e3ca2-5630-4d59-904c-810951329397">
Jan 27 15:35:49 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <system>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="serial">eae5a95c-09c0-4c0b-ae8f-3ab2659972b8</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="uuid">eae5a95c-09c0-4c0b-ae8f-3ab2659972b8</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </system>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <os>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </os>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <features>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </features>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.config"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:2a:9e:58"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <target dev="tapbe6e3ca2-56"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/console.log" append="off"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <video>
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </video>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:35:49 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:35:49 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:35:49 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:35:49 compute-0 nova_compute[185191]: </domain>
Jan 27 15:35:49 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.520 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Preparing to wait for external event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.521 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.521 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.521 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.522 185195 DEBUG nova.virt.libvirt.vif [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-74227258',display_name='tempest-ServersTestJSON-server-74227258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-74227258',id=7,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOHGu+sKZvpneTa1cl6PsMBo1htCB/ReJvXR4yYIPfN3V7p8Dge6SHSk5NbvWpfuCZ5QNNfus6hdCAeuxco1wUwmLDMTJh0we5kyHGikIwmpz1xRz0qGF9R9HJmnBRA1Q==',key_name='tempest-keypair-1427022892',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9f2c5d84ff64d1da269b157e0956b5a',ramdisk_id='',reservation_id='r-zu48u0rc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1062714811',owner_user_name='tempest-ServersTestJSON-1062714811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='405a986111cf446a943b8d37c3022002',uuid=eae5a95c-09c0-4c0b-ae8f-3ab2659972b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.523 185195 DEBUG nova.network.os_vif_util [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converting VIF {"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.523 185195 DEBUG nova.network.os_vif_util [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.524 185195 DEBUG os_vif [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.524 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.525 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.526 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.529 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.529 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe6e3ca2-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.530 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe6e3ca2-56, col_values=(('external_ids', {'iface-id': 'be6e3ca2-5630-4d59-904c-810951329397', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:9e:58', 'vm-uuid': 'eae5a95c-09c0-4c0b-ae8f-3ab2659972b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.5333] manager: (tapbe6e3ca2-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.532 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.535 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.543 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.545 185195 INFO os_vif [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56')
Jan 27 15:35:49 compute-0 kernel: tap33cb1013-47: entered promiscuous mode
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.5658] manager: (tap33cb1013-47): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 27 15:35:49 compute-0 ovn_controller[97541]: 2026-01-27T15:35:49Z|00066|binding|INFO|Claiming lport 33cb1013-4786-49f5-a482-721c6aeb907b for this chassis.
Jan 27 15:35:49 compute-0 ovn_controller[97541]: 2026-01-27T15:35:49Z|00067|binding|INFO|33cb1013-4786-49f5-a482-721c6aeb907b: Claiming fa:16:3e:c6:55:96 10.100.0.6
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.569 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 ovn_controller[97541]: 2026-01-27T15:35:49Z|00068|binding|INFO|Setting lport 33cb1013-4786-49f5-a482-721c6aeb907b ovn-installed in OVS
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.590 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.595 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 systemd-machined[156506]: New machine qemu-6-instance-00000006.
Jan 27 15:35:49 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 27 15:35:49 compute-0 systemd-udevd[249408]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:35:49 compute-0 ovn_controller[97541]: 2026-01-27T15:35:49Z|00069|binding|INFO|Setting lport 33cb1013-4786-49f5-a482-721c6aeb907b up in Southbound
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.635 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:55:96 10.100.0.6'], port_security=['fa:16:3e:c6:55:96 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4f95e32-4dde-475f-bf71-8bd9391938a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de927906c1224ae18edd6fb91a4a7037', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f3558f-ca7e-495f-bdf5-2d3d1950848a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64c139da-9754-4fed-b000-e06e325bc6ec, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=33cb1013-4786-49f5-a482-721c6aeb907b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.636 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 33cb1013-4786-49f5-a482-721c6aeb907b in datapath dd9a5530-7d18-48b0-bbd7-21f4f3192fce bound to our chassis
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.638 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd9a5530-7d18-48b0-bbd7-21f4f3192fce
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.6462] device (tap33cb1013-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.6468] device (tap33cb1013-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.649 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[288892c8-bf1c-40b8-9339-565fa10cc76f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.650 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd9a5530-71 in ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.652 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd9a5530-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.652 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[104473cf-5802-487e-84b7-9a86715a4b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.653 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4230cd-d826-4689-84bb-c581d1faf732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.666 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[be9f341d-2a09-40ae-ac1b-6296e6a8980b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.682 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.682 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.682 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] No VIF found with MAC fa:16:3e:2a:9e:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.683 185195 INFO nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Using config drive
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.691 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6868c61c-40b9-4823-9a2a-43ab74f1e0de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.721 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[107832b7-159b-42fa-940b-0d321ae43aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.727 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7772e3-1d0b-493f-86e2-a1375de77cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.7286] manager: (tapdd9a5530-70): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.761 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[74222430-48fe-4cb0-bb66-041e3b35ead7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.765 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff0cc1e-8130-44b1-8a5f-014e6355da46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.7873] device (tapdd9a5530-70): carrier: link connected
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.792 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Successfully created port: 11135ab8-7999-42aa-8036-2c6b47a82768 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.792 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[eee2dd6a-26f5-4b39-8111-756c8e3ce1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.811 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6132a904-5c99-4343-860f-433a586a739c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd9a5530-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:51:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572551, 'reachable_time': 41377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249444, 'error': None, 'target': 'ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.829 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b7f831-86c2-4dfd-a31f-7452a1cf81a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:51c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572551, 'tstamp': 572551}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249445, 'error': None, 'target': 'ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.845 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e713d207-8d4b-4d78-85b7-09db76cdda3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd9a5530-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:51:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572551, 'reachable_time': 41377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249446, 'error': None, 'target': 'ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.890 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdd9e67-8e8e-4f63-93f2-b476002ce35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.967 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c178fd4a-8b13-43cf-b608-cd452e22ff82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.969 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd9a5530-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.969 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.970 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd9a5530-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 kernel: tapdd9a5530-70: entered promiscuous mode
Jan 27 15:35:49 compute-0 NetworkManager[56090]: <info>  [1769528149.9737] manager: (tapdd9a5530-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.972 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.978 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:49.980 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd9a5530-70, col_values=(('external_ids', {'iface-id': '09357bac-861f-495f-9fcb-374ff41c059c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:49 compute-0 nova_compute[185191]: 2026-01-27 15:35:49.981 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:49 compute-0 ovn_controller[97541]: 2026-01-27T15:35:49Z|00070|binding|INFO|Releasing lport 09357bac-861f-495f-9fcb-374ff41c059c from this chassis (sb_readonly=0)
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.000 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.002 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.003 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd9a5530-7d18-48b0-bbd7-21f4f3192fce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd9a5530-7d18-48b0-bbd7-21f4f3192fce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.004 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4f2ba0-ed6e-44c8-9dc0-39b5f50d0c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.005 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-dd9a5530-7d18-48b0-bbd7-21f4f3192fce
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/dd9a5530-7d18-48b0-bbd7-21f4f3192fce.pid.haproxy
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID dd9a5530-7d18-48b0-bbd7-21f4f3192fce
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.006 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'env', 'PROCESS_TAG=haproxy-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd9a5530-7d18-48b0-bbd7-21f4f3192fce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.072 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528150.0722861, b4f95e32-4dde-475f-bf71-8bd9391938a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.073 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] VM Started (Lifecycle Event)
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.101 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.114 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528150.0723965, b4f95e32-4dde-475f-bf71-8bd9391938a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.115 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] VM Paused (Lifecycle Event)
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.145 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.151 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.238 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:35:50 compute-0 podman[249489]: 2026-01-27 15:35:50.395275958 +0000 UTC m=+0.037252268 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:35:50 compute-0 podman[249489]: 2026-01-27 15:35:50.52806484 +0000 UTC m=+0.170041140 container create a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 15:35:50 compute-0 systemd[1]: Started libpod-conmon-a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4.scope.
Jan 27 15:35:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142169cacc7ae401695b79e524a198c75931d193588ec653c56a9fb187ee11d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.684 185195 INFO nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Creating config drive at /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.config
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.692 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgwyobpi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:50 compute-0 podman[249489]: 2026-01-27 15:35:50.701392727 +0000 UTC m=+0.343369067 container init a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:35:50 compute-0 podman[249489]: 2026-01-27 15:35:50.709374291 +0000 UTC m=+0.351350581 container start a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 15:35:50 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [NOTICE]   (249509) : New worker (249513) forked
Jan 27 15:35:50 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [NOTICE]   (249509) : Loading success.
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.817 185195 DEBUG oslo_concurrency.processutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgwyobpi" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:50 compute-0 kernel: tapbe6e3ca2-56: entered promiscuous mode
Jan 27 15:35:50 compute-0 NetworkManager[56090]: <info>  [1769528150.8869] manager: (tapbe6e3ca2-56): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 27 15:35:50 compute-0 systemd-udevd[249435]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.889 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 ovn_controller[97541]: 2026-01-27T15:35:50Z|00071|binding|INFO|Claiming lport be6e3ca2-5630-4d59-904c-810951329397 for this chassis.
Jan 27 15:35:50 compute-0 ovn_controller[97541]: 2026-01-27T15:35:50Z|00072|binding|INFO|be6e3ca2-5630-4d59-904c-810951329397: Claiming fa:16:3e:2a:9e:58 10.100.0.9
Jan 27 15:35:50 compute-0 NetworkManager[56090]: <info>  [1769528150.9017] device (tapbe6e3ca2-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:35:50 compute-0 NetworkManager[56090]: <info>  [1769528150.9024] device (tapbe6e3ca2-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.904 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 ovn_controller[97541]: 2026-01-27T15:35:50Z|00073|binding|INFO|Setting lport be6e3ca2-5630-4d59-904c-810951329397 ovn-installed in OVS
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.910 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 systemd-machined[156506]: New machine qemu-7-instance-00000007.
Jan 27 15:35:50 compute-0 nova_compute[185191]: 2026-01-27 15:35:50.939 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.938 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:9e:58 10.100.0.9'], port_security=['fa:16:3e:2a:9e:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'eae5a95c-09c0-4c0b-ae8f-3ab2659972b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9f2c5d84ff64d1da269b157e0956b5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e5d012e-5545-41ed-9611-56cb4722c00b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a45e99c1-8db4-4465-9c01-3a3163fc599d, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=be6e3ca2-5630-4d59-904c-810951329397) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:35:50 compute-0 ovn_controller[97541]: 2026-01-27T15:35:50Z|00074|binding|INFO|Setting lport be6e3ca2-5630-4d59-904c-810951329397 up in Southbound
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.941 106793 INFO neutron.agent.ovn.metadata.agent [-] Port be6e3ca2-5630-4d59-904c-810951329397 in datapath fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 bound to our chassis
Jan 27 15:35:50 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.943 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.953 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ee65c7a9-1947-42e0-b5d7-ec31eda6c520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.954 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd9b0a9e-61 in ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.955 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd9b0a9e-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.955 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1b20797b-57d2-43f9-8378-c412f8a5e881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.956 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0511c8b4-e55e-41ee-9187-2e832946d36a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.968 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[556a6876-4bc6-4a2f-9ef2-df6a53409add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:50.982 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1e45460a-168d-4739-927b-041690375893]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.012 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5ffc0d-07ed-4aae-a3ac-eb0b2547ef30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 NetworkManager[56090]: <info>  [1769528151.0193] manager: (tapfd9b0a9e-60): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.019 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0e7605-f177-496b-ab78-766b8eb91e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.052 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[723571fb-10a1-4b93-b4d7-82f8db523756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.062 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[57fe85a0-cc6a-4cfd-a625-78b1ee6ab0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 NetworkManager[56090]: <info>  [1769528151.0884] device (tapfd9b0a9e-60): carrier: link connected
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.095 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1911e7-18d4-480e-824e-4792d6843809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.114 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe69740-d584-499c-941d-87864a8de920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd9b0a9e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:40:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572681, 'reachable_time': 19002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249554, 'error': None, 'target': 'ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.132 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2007d052-27d3-4bd6-81ad-80ae5359c19e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:4043'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572681, 'tstamp': 572681}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249556, 'error': None, 'target': 'ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.150 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[aa95528d-727a-4eac-a791-c3720c13f0b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd9b0a9e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:40:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572681, 'reachable_time': 19002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249557, 'error': None, 'target': 'ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.179 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fcaede41-a92f-4e20-b8a9-39b14845d778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.234 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[06a045ea-35c0-4be9-88f6-ac3d098c3b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.236 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd9b0a9e-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.236 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.237 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd9b0a9e-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:51 compute-0 NetworkManager[56090]: <info>  [1769528151.2400] manager: (tapfd9b0a9e-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 27 15:35:51 compute-0 kernel: tapfd9b0a9e-60: entered promiscuous mode
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.242 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.248 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd9b0a9e-60, col_values=(('external_ids', {'iface-id': 'b1652441-1adf-4d7f-af7d-66c93a79a206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:51 compute-0 ovn_controller[97541]: 2026-01-27T15:35:51Z|00075|binding|INFO|Releasing lport b1652441-1adf-4d7f-af7d-66c93a79a206 from this chassis (sb_readonly=0)
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.251 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.255 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.256 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f505f5-d48e-4dfd-b3b5-731be83f637b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.257 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348.pid.haproxy
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:35:51 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:51.257 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'env', 'PROCESS_TAG=haproxy-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.275 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.292 185195 DEBUG nova.compute.manager [req-8e8d7201-a6d2-421c-970a-446d5176740d req-ef8845fb-7efe-46bb-bef9-0c723bb486cb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.293 185195 DEBUG oslo_concurrency.lockutils [req-8e8d7201-a6d2-421c-970a-446d5176740d req-ef8845fb-7efe-46bb-bef9-0c723bb486cb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.293 185195 DEBUG oslo_concurrency.lockutils [req-8e8d7201-a6d2-421c-970a-446d5176740d req-ef8845fb-7efe-46bb-bef9-0c723bb486cb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.294 185195 DEBUG oslo_concurrency.lockutils [req-8e8d7201-a6d2-421c-970a-446d5176740d req-ef8845fb-7efe-46bb-bef9-0c723bb486cb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.294 185195 DEBUG nova.compute.manager [req-8e8d7201-a6d2-421c-970a-446d5176740d req-ef8845fb-7efe-46bb-bef9-0c723bb486cb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Processing event network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.295 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.300 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528151.3000064, b4f95e32-4dde-475f-bf71-8bd9391938a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.301 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] VM Resumed (Lifecycle Event)
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.311 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.317 185195 INFO nova.virt.libvirt.driver [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Instance spawned successfully.
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.318 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.325 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.332 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.344 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.344 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.345 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.346 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.346 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.347 185195 DEBUG nova.virt.libvirt.driver [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.359 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.443 185195 INFO nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Took 10.70 seconds to spawn the instance on the hypervisor.
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.444 185195 DEBUG nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.494 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528151.493797, eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.494 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] VM Started (Lifecycle Event)
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.526 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.537 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528151.4938495, eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.537 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] VM Paused (Lifecycle Event)
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.548 185195 INFO nova.compute.manager [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Took 11.34 seconds to build instance.
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.560 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.567 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.587 185195 DEBUG oslo_concurrency.lockutils [None req-da1c634d-9898-4a3e-9cfb-af0c192410b9 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.592 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.648 185195 DEBUG nova.network.neutron [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updated VIF entry in instance network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.649 185195 DEBUG nova.network.neutron [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:51 compute-0 podman[249594]: 2026-01-27 15:35:51.668968684 +0000 UTC m=+0.029408098 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.803 185195 DEBUG oslo_concurrency.lockutils [req-a601e348-00da-4076-9711-2dddbbf7766e req-e075a1db-b214-40ea-8718-1dfc264886f7 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:51 compute-0 podman[249594]: 2026-01-27 15:35:51.912751226 +0000 UTC m=+0.273190610 container create 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:35:51 compute-0 nova_compute[185191]: 2026-01-27 15:35:51.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:52 compute-0 systemd[1]: Started libpod-conmon-2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7.scope.
Jan 27 15:35:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36270f17ec0aafa1349cdcb3f35c9883242e689f1423333a0d319e96d9ac4ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.132 185195 DEBUG nova.compute.manager [req-ea8416fa-c5fc-4f9e-805d-57b5943e27a6 req-a915fd2a-da43-41fb-83fe-5a6721309a6b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.133 185195 DEBUG oslo_concurrency.lockutils [req-ea8416fa-c5fc-4f9e-805d-57b5943e27a6 req-a915fd2a-da43-41fb-83fe-5a6721309a6b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.134 185195 DEBUG oslo_concurrency.lockutils [req-ea8416fa-c5fc-4f9e-805d-57b5943e27a6 req-a915fd2a-da43-41fb-83fe-5a6721309a6b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.135 185195 DEBUG oslo_concurrency.lockutils [req-ea8416fa-c5fc-4f9e-805d-57b5943e27a6 req-a915fd2a-da43-41fb-83fe-5a6721309a6b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.135 185195 DEBUG nova.compute.manager [req-ea8416fa-c5fc-4f9e-805d-57b5943e27a6 req-a915fd2a-da43-41fb-83fe-5a6721309a6b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Processing event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.138 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:35:52 compute-0 podman[249594]: 2026-01-27 15:35:52.140813787 +0000 UTC m=+0.501253201 container init 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 15:35:52 compute-0 podman[249594]: 2026-01-27 15:35:52.1521367 +0000 UTC m=+0.512576084 container start 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.154 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528152.1531103, eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.155 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] VM Resumed (Lifecycle Event)
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.171 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.178 185195 INFO nova.virt.libvirt.driver [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Instance spawned successfully.
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.180 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:35:52 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [NOTICE]   (249613) : New worker (249615) forked
Jan 27 15:35:52 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [NOTICE]   (249613) : Loading success.
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.204 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.214 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.222 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.224 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.224 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.225 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.226 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.227 185195 DEBUG nova.virt.libvirt.driver [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.240 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.307 185195 INFO nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Took 9.29 seconds to spawn the instance on the hypervisor.
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.308 185195 DEBUG nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.381 185195 INFO nova.compute.manager [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Took 9.99 seconds to build instance.
Jan 27 15:35:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.404 185195 DEBUG oslo_concurrency.lockutils [None req-7e2c897d-47a7-4f01-b117-2649db73e54d 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.824 185195 DEBUG nova.network.neutron [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updated VIF entry in instance network info cache for port be6e3ca2-5630-4d59-904c-810951329397. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:35:52 compute-0 nova_compute[185191]: 2026-01-27 15:35:52.826 185195 DEBUG nova.network.neutron [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updating instance_info_cache with network_info: [{"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.027 185195 DEBUG oslo_concurrency.lockutils [req-fea1f248-1101-4cc8-9113-2a93df2a8aa7 req-5f25ac09-c1d6-4d2b-94ed-425f5d7c9637 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.613 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Successfully updated port: 11135ab8-7999-42aa-8036-2c6b47a82768 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.811 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.812 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquired lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.813 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.899 185195 DEBUG nova.compute.manager [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.900 185195 DEBUG oslo_concurrency.lockutils [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.901 185195 DEBUG oslo_concurrency.lockutils [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.901 185195 DEBUG oslo_concurrency.lockutils [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.902 185195 DEBUG nova.compute.manager [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] No waiting events found dispatching network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:35:53 compute-0 nova_compute[185191]: 2026-01-27 15:35:53.902 185195 WARNING nova.compute.manager [req-e53415a0-219d-4ae8-aede-56940cb87a92 req-dbf351b3-6295-44e4-84e2-5a87c573a473 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received unexpected event network-vif-plugged-33cb1013-4786-49f5-a482-721c6aeb907b for instance with vm_state active and task_state None.
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.117 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.298 185195 DEBUG nova.compute.manager [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.299 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.299 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.299 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.299 185195 DEBUG nova.compute.manager [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] No waiting events found dispatching network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.300 185195 WARNING nova.compute.manager [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received unexpected event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 for instance with vm_state active and task_state None.
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.300 185195 DEBUG nova.compute.manager [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-changed-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.300 185195 DEBUG nova.compute.manager [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Refreshing instance network info cache due to event network-changed-11135ab8-7999-42aa-8036-2c6b47a82768. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.300 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:54 compute-0 podman[249643]: 2026-01-27 15:35:54.316140295 +0000 UTC m=+0.075771398 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:35:54 compute-0 nova_compute[185191]: 2026-01-27 15:35:54.534 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.188 185195 DEBUG nova.network.neutron [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updating instance_info_cache with network_info: [{"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.388 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Releasing lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.388 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Instance network_info: |[{"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.389 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.390 185195 DEBUG nova.network.neutron [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Refreshing network info cache for port 11135ab8-7999-42aa-8036-2c6b47a82768 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.393 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Start _get_guest_xml network_info=[{"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.399 185195 WARNING nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.408 185195 DEBUG nova.virt.libvirt.host [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.409 185195 DEBUG nova.virt.libvirt.host [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.419 185195 DEBUG nova.virt.libvirt.host [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.420 185195 DEBUG nova.virt.libvirt.host [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.421 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.421 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.422 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.422 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.423 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.423 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.424 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.424 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.425 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.425 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.426 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.426 185195 DEBUG nova.virt.hardware [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.429 185195 DEBUG nova.virt.libvirt.vif [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1753998192',display_name='tempest-ServersTestManualDisk-server-1753998192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1753998192',id=8,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO7iZFgTagHGhdMoGRpAvvVWrZ6SDns1JkVmgs4rz+kHZN+1VJdDk1Lqzdx0u3ZQt32yWI9Aa5KzVBclBwn/lqa8OUPqiskz4nKannJLUhdZDRmEZfBSllbq957QrkWsDA==',key_name='tempest-keypair-1572229790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='872630f403b24cda8e3ab59acbe33b66',ramdisk_id='',reservation_id='r-gf7s370k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1680033429',owner_user_name='tempest-ServersTestManualDisk-1680033429-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e4d1728be0c14934b0fb170d90f2cf80',uuid=6c1eac15-4acf-423d-817f-805a374bb405,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.430 185195 DEBUG nova.network.os_vif_util [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converting VIF {"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.430 185195 DEBUG nova.network.os_vif_util [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.432 185195 DEBUG nova.objects.instance [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c1eac15-4acf-423d-817f-805a374bb405 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.450 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <uuid>6c1eac15-4acf-423d-817f-805a374bb405</uuid>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <name>instance-00000008</name>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:name>tempest-ServersTestManualDisk-server-1753998192</nova:name>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:35:55</nova:creationTime>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:user uuid="e4d1728be0c14934b0fb170d90f2cf80">tempest-ServersTestManualDisk-1680033429-project-member</nova:user>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:project uuid="872630f403b24cda8e3ab59acbe33b66">tempest-ServersTestManualDisk-1680033429</nova:project>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         <nova:port uuid="11135ab8-7999-42aa-8036-2c6b47a82768">
Jan 27 15:35:55 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <system>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="serial">6c1eac15-4acf-423d-817f-805a374bb405</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="uuid">6c1eac15-4acf-423d-817f-805a374bb405</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </system>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <os>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </os>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <features>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </features>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.config"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:86:78:0f"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <target dev="tap11135ab8-79"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/console.log" append="off"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <video>
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </video>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:35:55 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:35:55 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:35:55 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:35:55 compute-0 nova_compute[185191]: </domain>
Jan 27 15:35:55 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.458 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Preparing to wait for external event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.459 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.459 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.459 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.460 185195 DEBUG nova.virt.libvirt.vif [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:35:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1753998192',display_name='tempest-ServersTestManualDisk-server-1753998192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1753998192',id=8,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO7iZFgTagHGhdMoGRpAvvVWrZ6SDns1JkVmgs4rz+kHZN+1VJdDk1Lqzdx0u3ZQt32yWI9Aa5KzVBclBwn/lqa8OUPqiskz4nKannJLUhdZDRmEZfBSllbq957QrkWsDA==',key_name='tempest-keypair-1572229790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='872630f403b24cda8e3ab59acbe33b66',ramdisk_id='',reservation_id='r-gf7s370k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1680033429',owner_user_name='tempest-ServersTestManualDisk-1680033429-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:35:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e4d1728be0c14934b0fb170d90f2cf80',uuid=6c1eac15-4acf-423d-817f-805a374bb405,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.460 185195 DEBUG nova.network.os_vif_util [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converting VIF {"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.460 185195 DEBUG nova.network.os_vif_util [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.461 185195 DEBUG os_vif [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.461 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.461 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.462 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.464 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.465 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11135ab8-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.465 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11135ab8-79, col_values=(('external_ids', {'iface-id': '11135ab8-7999-42aa-8036-2c6b47a82768', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:78:0f', 'vm-uuid': '6c1eac15-4acf-423d-817f-805a374bb405'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.467 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:55 compute-0 NetworkManager[56090]: <info>  [1769528155.4692] manager: (tap11135ab8-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.471 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.478 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.478 185195 INFO os_vif [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79')
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.551 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.551 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.552 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] No VIF found with MAC fa:16:3e:86:78:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.553 185195 INFO nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Using config drive
Jan 27 15:35:55 compute-0 nova_compute[185191]: 2026-01-27 15:35:55.942 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.572 185195 INFO nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Creating config drive at /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.config
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.578 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx618h2x9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.707 185195 DEBUG oslo_concurrency.processutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx618h2x9" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:35:56 compute-0 kernel: tap11135ab8-79: entered promiscuous mode
Jan 27 15:35:56 compute-0 NetworkManager[56090]: <info>  [1769528156.7783] manager: (tap11135ab8-79): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.781 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:56 compute-0 ovn_controller[97541]: 2026-01-27T15:35:56Z|00076|binding|INFO|Claiming lport 11135ab8-7999-42aa-8036-2c6b47a82768 for this chassis.
Jan 27 15:35:56 compute-0 ovn_controller[97541]: 2026-01-27T15:35:56Z|00077|binding|INFO|11135ab8-7999-42aa-8036-2c6b47a82768: Claiming fa:16:3e:86:78:0f 10.100.0.10
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.793 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:78:0f 10.100.0.10'], port_security=['fa:16:3e:86:78:0f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c1eac15-4acf-423d-817f-805a374bb405', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '872630f403b24cda8e3ab59acbe33b66', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d8c547c-c22d-49d2-bec0-08b83395a404', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7549cdd9-81a6-48c3-b592-75b552935131, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=11135ab8-7999-42aa-8036-2c6b47a82768) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.794 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 11135ab8-7999-42aa-8036-2c6b47a82768 in datapath d02266ee-be50-465e-a4c8-fe7fe20c6f96 bound to our chassis
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.796 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d02266ee-be50-465e-a4c8-fe7fe20c6f96
Jan 27 15:35:56 compute-0 ovn_controller[97541]: 2026-01-27T15:35:56Z|00078|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 ovn-installed in OVS
Jan 27 15:35:56 compute-0 ovn_controller[97541]: 2026-01-27T15:35:56Z|00079|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 up in Southbound
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.808 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f252b346-6a61-4fb6-85b8-29a949c5d65d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.809 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:56 compute-0 nova_compute[185191]: 2026-01-27 15:35:56.813 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.815 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd02266ee-b1 in ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.817 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd02266ee-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.817 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1eaa3772-6f86-4194-a26a-9e30088eeef1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.819 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fa402fe9-a6bd-4fc1-9f86-633b0f162062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.832 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[a9532a05-2ed1-45cf-819d-f9f559d04fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 systemd-machined[156506]: New machine qemu-8-instance-00000008.
Jan 27 15:35:56 compute-0 systemd-udevd[249692]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:35:56 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 27 15:35:56 compute-0 NetworkManager[56090]: <info>  [1769528156.8590] device (tap11135ab8-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.858 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[403490be-2629-4d9a-93c6-30cabc15c100]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 NetworkManager[56090]: <info>  [1769528156.8638] device (tap11135ab8-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.888 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd194c2-379b-42fa-ba8b-799f8d799485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 systemd-udevd[249694]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:35:56 compute-0 NetworkManager[56090]: <info>  [1769528156.9014] manager: (tapd02266ee-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.904 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6824af-6698-434c-a3cb-2cfd21b88bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.938 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8e3726-72ee-43e4-9747-af2a85019693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.941 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b37171-ed53-4642-8271-17946eee3c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 NetworkManager[56090]: <info>  [1769528156.9632] device (tapd02266ee-b0): carrier: link connected
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.967 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[c04415f0-1362-4504-bdef-f7ac66489501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.983 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[08ec824d-f35d-4677-9cd5-d7ff6793935a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02266ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:4b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573269, 'reachable_time': 17133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249722, 'error': None, 'target': 'ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:56 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:56.998 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9de099-43ac-4b53-9fe3-97b2abb3212c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:4b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573269, 'tstamp': 573269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249723, 'error': None, 'target': 'ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.016 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[21ecaf3f-194a-4dc2-96ff-9245f861d40e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd02266ee-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:4b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573269, 'reachable_time': 17133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249724, 'error': None, 'target': 'ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.048 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d3563a24-3298-4c56-85a4-3fb6c31dfdbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.116 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[577c3888-e221-46a0-8f25-6b490cf2fcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.118 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02266ee-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.118 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.119 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd02266ee-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.121 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:57 compute-0 kernel: tapd02266ee-b0: entered promiscuous mode
Jan 27 15:35:57 compute-0 NetworkManager[56090]: <info>  [1769528157.1225] manager: (tapd02266ee-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.128 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd02266ee-b0, col_values=(('external_ids', {'iface-id': '12e84931-8000-450c-908f-e753b71be68a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.126 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.131 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:57 compute-0 ovn_controller[97541]: 2026-01-27T15:35:57Z|00080|binding|INFO|Releasing lport 12e84931-8000-450c-908f-e753b71be68a from this chassis (sb_readonly=0)
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.133 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.136 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d02266ee-be50-465e-a4c8-fe7fe20c6f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d02266ee-be50-465e-a4c8-fe7fe20c6f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.137 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2084aac1-9212-4d41-b4bb-2a268a6eac56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.138 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-d02266ee-be50-465e-a4c8-fe7fe20c6f96
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/d02266ee-be50-465e-a4c8-fe7fe20c6f96.pid.haproxy
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID d02266ee-be50-465e-a4c8-fe7fe20c6f96
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:35:57 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:35:57.141 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'env', 'PROCESS_TAG=haproxy-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d02266ee-be50-465e-a4c8-fe7fe20c6f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.146 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.196 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528157.1954823, 6c1eac15-4acf-423d-817f-805a374bb405 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.197 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] VM Started (Lifecycle Event)
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.347 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.354 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528157.1957724, 6c1eac15-4acf-423d-817f-805a374bb405 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.355 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] VM Paused (Lifecycle Event)
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.380 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.393 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.419 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:35:57 compute-0 podman[249762]: 2026-01-27 15:35:57.538384082 +0000 UTC m=+0.068272748 container create dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:35:57 compute-0 systemd[1]: Started libpod-conmon-dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889.scope.
Jan 27 15:35:57 compute-0 podman[249762]: 2026-01-27 15:35:57.503396066 +0000 UTC m=+0.033284752 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:35:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b044cee2cbdcaeb81072e173bbcb5c7a4a42c30fa38edcaeacd62662b895d394/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:35:57 compute-0 podman[249762]: 2026-01-27 15:35:57.659105662 +0000 UTC m=+0.188994358 container init dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:35:57 compute-0 podman[249762]: 2026-01-27 15:35:57.668492813 +0000 UTC m=+0.198381479 container start dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:35:57 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [NOTICE]   (249780) : New worker (249782) forked
Jan 27 15:35:57 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [NOTICE]   (249780) : Loading success.
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.826 185195 DEBUG nova.compute.manager [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.827 185195 DEBUG nova.compute.manager [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing instance network info cache due to event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.828 185195 DEBUG oslo_concurrency.lockutils [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.828 185195 DEBUG oslo_concurrency.lockutils [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:57 compute-0 nova_compute[185191]: 2026-01-27 15:35:57.829 185195 DEBUG nova.network.neutron [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:35:58 compute-0 nova_compute[185191]: 2026-01-27 15:35:58.387 185195 DEBUG nova.network.neutron [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updated VIF entry in instance network info cache for port 11135ab8-7999-42aa-8036-2c6b47a82768. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:35:58 compute-0 nova_compute[185191]: 2026-01-27 15:35:58.389 185195 DEBUG nova.network.neutron [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updating instance_info_cache with network_info: [{"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:35:58 compute-0 nova_compute[185191]: 2026-01-27 15:35:58.495 185195 DEBUG oslo_concurrency.lockutils [req-b28936e9-cbf0-47ef-ba37-698b53354a43 req-d0cba1f7-1727-44cb-9b2a-ec74b039c5f2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:35:58 compute-0 nova_compute[185191]: 2026-01-27 15:35:58.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:35:59 compute-0 nova_compute[185191]: 2026-01-27 15:35:59.586 185195 DEBUG nova.compute.manager [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-changed-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:35:59 compute-0 nova_compute[185191]: 2026-01-27 15:35:59.587 185195 DEBUG nova.compute.manager [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Refreshing instance network info cache due to event network-changed-be6e3ca2-5630-4d59-904c-810951329397. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:35:59 compute-0 nova_compute[185191]: 2026-01-27 15:35:59.588 185195 DEBUG oslo_concurrency.lockutils [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:35:59 compute-0 nova_compute[185191]: 2026-01-27 15:35:59.588 185195 DEBUG oslo_concurrency.lockutils [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:35:59 compute-0 nova_compute[185191]: 2026-01-27 15:35:59.589 185195 DEBUG nova.network.neutron [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Refreshing network info cache for port be6e3ca2-5630-4d59-904c-810951329397 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:35:59 compute-0 podman[201073]: time="2026-01-27T15:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:35:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30972 "" "Go-http-client/1.1"
Jan 27 15:35:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5309 "" "Go-http-client/1.1"
Jan 27 15:36:00 compute-0 ovn_controller[97541]: 2026-01-27T15:36:00Z|00081|binding|INFO|Releasing lport 09357bac-861f-495f-9fcb-374ff41c059c from this chassis (sb_readonly=0)
Jan 27 15:36:00 compute-0 ovn_controller[97541]: 2026-01-27T15:36:00Z|00082|binding|INFO|Releasing lport b1652441-1adf-4d7f-af7d-66c93a79a206 from this chassis (sb_readonly=0)
Jan 27 15:36:00 compute-0 ovn_controller[97541]: 2026-01-27T15:36:00Z|00083|binding|INFO|Releasing lport 12e84931-8000-450c-908f-e753b71be68a from this chassis (sb_readonly=0)
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.110 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:00.256 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:00.258 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:00.259 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.468 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.754 185195 DEBUG nova.compute.manager [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.756 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.757 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.757 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.758 185195 DEBUG nova.compute.manager [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Processing event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.758 185195 DEBUG nova.compute.manager [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.759 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.759 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.760 185195 DEBUG oslo_concurrency.lockutils [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.760 185195 DEBUG nova.compute.manager [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] No waiting events found dispatching network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.760 185195 WARNING nova.compute.manager [req-b118bdb3-f15a-4cb5-84e9-41a19c7a8c04 req-be155cf5-6bbf-40c5-90df-ba26cce288f3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received unexpected event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 for instance with vm_state building and task_state spawning.
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.761 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.768 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528160.7666242, 6c1eac15-4acf-423d-817f-805a374bb405 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.769 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] VM Resumed (Lifecycle Event)
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.771 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.784 185195 INFO nova.virt.libvirt.driver [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Instance spawned successfully.
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.785 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.838 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.843 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.874 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.875 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.876 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.877 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.877 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.878 185195 DEBUG nova.virt.libvirt.driver [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.948 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:00 compute-0 nova_compute[185191]: 2026-01-27 15:36:00.970 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.404 185195 INFO nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Took 13.67 seconds to spawn the instance on the hypervisor.
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.404 185195 DEBUG nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:01 compute-0 openstack_network_exporter[204239]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:36:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:36:01 compute-0 openstack_network_exporter[204239]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:36:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.476 185195 DEBUG nova.network.neutron [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updated VIF entry in instance network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.476 185195 DEBUG nova.network.neutron [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.903 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.903 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.903 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.903 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.904 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.905 185195 INFO nova.compute.manager [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Terminating instance
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.906 185195 DEBUG nova.compute.manager [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:36:01 compute-0 kernel: tapbe6e3ca2-56 (unregistering): left promiscuous mode
Jan 27 15:36:01 compute-0 NetworkManager[56090]: <info>  [1769528161.9414] device (tapbe6e3ca2-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:36:01 compute-0 ovn_controller[97541]: 2026-01-27T15:36:01Z|00084|binding|INFO|Releasing lport be6e3ca2-5630-4d59-904c-810951329397 from this chassis (sb_readonly=0)
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.949 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:01 compute-0 ovn_controller[97541]: 2026-01-27T15:36:01Z|00085|binding|INFO|Setting lport be6e3ca2-5630-4d59-904c-810951329397 down in Southbound
Jan 27 15:36:01 compute-0 ovn_controller[97541]: 2026-01-27T15:36:01Z|00086|binding|INFO|Removing iface tapbe6e3ca2-56 ovn-installed in OVS
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.955 185195 DEBUG oslo_concurrency.lockutils [req-c27d7934-4bd7-424a-8c84-23e2b5547d2b req-51c8a74f-b713-4015-add1-8ddbef93222a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.956 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.968 185195 INFO nova.compute.manager [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Took 14.76 seconds to build instance.
Jan 27 15:36:01 compute-0 nova_compute[185191]: 2026-01-27 15:36:01.972 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:01 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 27 15:36:01 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 10.410s CPU time.
Jan 27 15:36:01 compute-0 systemd-machined[156506]: Machine qemu-7-instance-00000007 terminated.
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.107 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:9e:58 10.100.0.9'], port_security=['fa:16:3e:2a:9e:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'eae5a95c-09c0-4c0b-ae8f-3ab2659972b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9f2c5d84ff64d1da269b157e0956b5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e5d012e-5545-41ed-9611-56cb4722c00b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a45e99c1-8db4-4465-9c01-3a3163fc599d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=be6e3ca2-5630-4d59-904c-810951329397) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.108 106793 INFO neutron.agent.ovn.metadata.agent [-] Port be6e3ca2-5630-4d59-904c-810951329397 in datapath fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 unbound from our chassis
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.110 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.112 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c729af9c-27c6-4575-a15c-52338dbb5d5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.113 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 namespace which is not needed anymore
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.127 185195 DEBUG oslo_concurrency.lockutils [None req-52c6c207-91ea-47d9-a4d3-e481950e886f e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.131 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.140 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.185 185195 INFO nova.virt.libvirt.driver [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Instance destroyed successfully.
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.186 185195 DEBUG nova.objects.instance [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lazy-loading 'resources' on Instance uuid eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.231 185195 DEBUG nova.virt.libvirt.vif [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:35:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-74227258',display_name='tempest-ServersTestJSON-server-74227258',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-74227258',id=7,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOHGu+sKZvpneTa1cl6PsMBo1htCB/ReJvXR4yYIPfN3V7p8Dge6SHSk5NbvWpfuCZ5QNNfus6hdCAeuxco1wUwmLDMTJh0we5kyHGikIwmpz1xRz0qGF9R9HJmnBRA1Q==',key_name='tempest-keypair-1427022892',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:35:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9f2c5d84ff64d1da269b157e0956b5a',ramdisk_id='',reservation_id='r-zu48u0rc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1062714811',owner_user_name='tempest-ServersTestJSON-1062714811-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:35:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='405a986111cf446a943b8d37c3022002',uuid=eae5a95c-09c0-4c0b-ae8f-3ab2659972b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.231 185195 DEBUG nova.network.os_vif_util [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converting VIF {"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.232 185195 DEBUG nova.network.os_vif_util [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.232 185195 DEBUG os_vif [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.234 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.234 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6e3ca2-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.238 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.239 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.241 185195 INFO os_vif [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:9e:58,bridge_name='br-int',has_traffic_filtering=True,id=be6e3ca2-5630-4d59-904c-810951329397,network=Network(fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe6e3ca2-56')
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.241 185195 INFO nova.virt.libvirt.driver [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Deleting instance files /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8_del
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.242 185195 INFO nova.virt.libvirt.driver [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Deletion of /var/lib/nova/instances/eae5a95c-09c0-4c0b-ae8f-3ab2659972b8_del complete
Jan 27 15:36:02 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [NOTICE]   (249613) : haproxy version is 2.8.14-c23fe91
Jan 27 15:36:02 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [NOTICE]   (249613) : path to executable is /usr/sbin/haproxy
Jan 27 15:36:02 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [WARNING]  (249613) : Exiting Master process...
Jan 27 15:36:02 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [ALERT]    (249613) : Current worker (249615) exited with code 143 (Terminated)
Jan 27 15:36:02 compute-0 neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348[249609]: [WARNING]  (249613) : All workers exited. Exiting... (0)
Jan 27 15:36:02 compute-0 systemd[1]: libpod-2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7.scope: Deactivated successfully.
Jan 27 15:36:02 compute-0 podman[249830]: 2026-01-27 15:36:02.320973154 +0000 UTC m=+0.060868260 container died 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7-userdata-shm.mount: Deactivated successfully.
Jan 27 15:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b36270f17ec0aafa1349cdcb3f35c9883242e689f1423333a0d319e96d9ac4ea-merged.mount: Deactivated successfully.
Jan 27 15:36:02 compute-0 podman[249830]: 2026-01-27 15:36:02.39257748 +0000 UTC m=+0.132472596 container cleanup 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:36:02 compute-0 systemd[1]: libpod-conmon-2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7.scope: Deactivated successfully.
Jan 27 15:36:02 compute-0 podman[249860]: 2026-01-27 15:36:02.464179975 +0000 UTC m=+0.047060690 container remove 2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.480 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ba669f55-d32a-4113-a446-0f9d8679d5da]: (4, ('Tue Jan 27 03:36:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 (2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7)\n2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7\nTue Jan 27 03:36:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 (2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7)\n2ea84a294f55b89329778c5e268dcd785165dfe4a3fc06a5e21be1426dc1d3b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.482 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[96cc68d0-9dc4-4855-878b-46fdbfbd1ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.483 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd9b0a9e-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:02 compute-0 kernel: tapfd9b0a9e-60: left promiscuous mode
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.485 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.497 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.499 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.502 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[639a17c9-96d1-4292-9eef-0c3fcddab5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.517 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c01a5d63-f712-4ced-b1eb-5d02c9360095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.518 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c63875c7-6241-4d23-9680-6fbfefc12eb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.535 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[351ab6c9-0cf9-4e6f-8c84-d66f413cf335]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572673, 'reachable_time': 35840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249875, 'error': None, 'target': 'ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd9b0a9e\x2d62ba\x2d4b27\x2d9d6d\x2dbf7e47fbc348.mount: Deactivated successfully.
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.544 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:36:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:02.544 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bfb50c-be54-44de-934a-8c1d4829d8b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.829 185195 INFO nova.compute.manager [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.830 185195 DEBUG oslo.service.loopingcall [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.830 185195 DEBUG nova.compute.manager [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:36:02 compute-0 nova_compute[185191]: 2026-01-27 15:36:02.831 185195 DEBUG nova.network.neutron [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:36:03 compute-0 nova_compute[185191]: 2026-01-27 15:36:03.939 185195 DEBUG nova.network.neutron [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updated VIF entry in instance network info cache for port be6e3ca2-5630-4d59-904c-810951329397. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:36:03 compute-0 nova_compute[185191]: 2026-01-27 15:36:03.940 185195 DEBUG nova.network.neutron [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updating instance_info_cache with network_info: [{"id": "be6e3ca2-5630-4d59-904c-810951329397", "address": "fa:16:3e:2a:9e:58", "network": {"id": "fd9b0a9e-62ba-4b27-9d6d-bf7e47fbc348", "bridge": "br-int", "label": "tempest-ServersTestJSON-679938800-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9f2c5d84ff64d1da269b157e0956b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6e3ca2-56", "ovs_interfaceid": "be6e3ca2-5630-4d59-904c-810951329397", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.126 185195 DEBUG nova.compute.manager [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-unplugged-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.126 185195 DEBUG oslo_concurrency.lockutils [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.127 185195 DEBUG oslo_concurrency.lockutils [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.127 185195 DEBUG oslo_concurrency.lockutils [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.127 185195 DEBUG nova.compute.manager [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] No waiting events found dispatching network-vif-unplugged-be6e3ca2-5630-4d59-904c-810951329397 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.127 185195 DEBUG nova.compute.manager [req-f657899e-8d2a-42fb-a060-355e311108d6 req-aed8d6ca-2840-4878-92b7-50f8a4ddce8e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-unplugged-be6e3ca2-5630-4d59-904c-810951329397 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:36:04 compute-0 nova_compute[185191]: 2026-01-27 15:36:04.141 185195 DEBUG oslo_concurrency.lockutils [req-912af6a7-5814-4088-a225-15b8e3772533 req-ee0b9c7b-f22c-41b1-83ed-489b9554863e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:05 compute-0 nova_compute[185191]: 2026-01-27 15:36:05.120 185195 DEBUG nova.network.neutron [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:05 compute-0 nova_compute[185191]: 2026-01-27 15:36:05.946 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.324 185195 DEBUG nova.compute.manager [req-06e06412-b747-4e42-9a56-04a373916b72 req-4e329430-a157-4a98-89ee-597bd1df004c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-deleted-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.325 185195 INFO nova.compute.manager [req-06e06412-b747-4e42-9a56-04a373916b72 req-4e329430-a157-4a98-89ee-597bd1df004c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Neutron deleted interface be6e3ca2-5630-4d59-904c-810951329397; detaching it from the instance and deleting it from the info cache
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.326 185195 DEBUG nova.network.neutron [req-06e06412-b747-4e42-9a56-04a373916b72 req-4e329430-a157-4a98-89ee-597bd1df004c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:06 compute-0 podman[249877]: 2026-01-27 15:36:06.333760004 +0000 UTC m=+0.090736569 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.386 185195 DEBUG nova.compute.manager [req-06e06412-b747-4e42-9a56-04a373916b72 req-4e329430-a157-4a98-89ee-597bd1df004c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Detach interface failed, port_id=be6e3ca2-5630-4d59-904c-810951329397, reason: Instance eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.395 185195 INFO nova.compute.manager [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Took 3.56 seconds to deallocate network for instance.
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.470 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.470 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.588 185195 DEBUG nova.compute.provider_tree [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.701 185195 DEBUG nova.compute.manager [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.702 185195 DEBUG oslo_concurrency.lockutils [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.703 185195 DEBUG oslo_concurrency.lockutils [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.703 185195 DEBUG oslo_concurrency.lockutils [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.704 185195 DEBUG nova.compute.manager [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] No waiting events found dispatching network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.704 185195 WARNING nova.compute.manager [req-9f8bb6c7-db45-41ee-ad65-4d5a8cb5661f req-ce1d061e-8800-460d-acba-e46cca4e70b0 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Received unexpected event network-vif-plugged-be6e3ca2-5630-4d59-904c-810951329397 for instance with vm_state deleted and task_state None.
Jan 27 15:36:06 compute-0 nova_compute[185191]: 2026-01-27 15:36:06.838 185195 DEBUG nova.scheduler.client.report [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.149 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.238 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.267 185195 INFO nova.scheduler.client.report [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Deleted allocations for instance eae5a95c-09c0-4c0b-ae8f-3ab2659972b8
Jan 27 15:36:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:07.266 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:07.266 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.269 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:07.383 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:07.384 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.385 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:07 compute-0 nova_compute[185191]: 2026-01-27 15:36:07.654 185195 DEBUG oslo_concurrency.lockutils [None req-b118ed0f-7216-4edb-8720-60d656799ac9 405a986111cf446a943b8d37c3022002 b9f2c5d84ff64d1da269b157e0956b5a - - default default] Lock "eae5a95c-09c0-4c0b-ae8f-3ab2659972b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:09 compute-0 podman[249898]: 2026-01-27 15:36:09.332142102 +0000 UTC m=+0.081034469 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, vcs-type=git, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:36:09 compute-0 podman[249896]: 2026-01-27 15:36:09.348945532 +0000 UTC m=+0.103829539 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:36:09 compute-0 podman[249897]: 2026-01-27 15:36:09.360410138 +0000 UTC m=+0.115111580 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 15:36:10 compute-0 nova_compute[185191]: 2026-01-27 15:36:10.949 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:11 compute-0 ovn_controller[97541]: 2026-01-27T15:36:11Z|00087|binding|INFO|Releasing lport 09357bac-861f-495f-9fcb-374ff41c059c from this chassis (sb_readonly=0)
Jan 27 15:36:11 compute-0 ovn_controller[97541]: 2026-01-27T15:36:11Z|00088|binding|INFO|Releasing lport 12e84931-8000-450c-908f-e753b71be68a from this chassis (sb_readonly=0)
Jan 27 15:36:11 compute-0 nova_compute[185191]: 2026-01-27 15:36:11.367 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.094 185195 DEBUG nova.compute.manager [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-changed-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.095 185195 DEBUG nova.compute.manager [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Refreshing instance network info cache due to event network-changed-11135ab8-7999-42aa-8036-2c6b47a82768. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.095 185195 DEBUG oslo_concurrency.lockutils [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.096 185195 DEBUG oslo_concurrency.lockutils [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.096 185195 DEBUG nova.network.neutron [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Refreshing network info cache for port 11135ab8-7999-42aa-8036-2c6b47a82768 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:36:12 compute-0 nova_compute[185191]: 2026-01-27 15:36:12.242 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.509 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.510 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.511 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.511 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.512 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.513 185195 INFO nova.compute.manager [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Terminating instance
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.515 185195 DEBUG nova.compute.manager [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:36:13 compute-0 kernel: tap11135ab8-79 (unregistering): left promiscuous mode
Jan 27 15:36:13 compute-0 NetworkManager[56090]: <info>  [1769528173.5393] device (tap11135ab8-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00089|binding|INFO|Releasing lport 11135ab8-7999-42aa-8036-2c6b47a82768 from this chassis (sb_readonly=0)
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00090|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 down in Southbound
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.550 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00091|binding|INFO|Removing iface tap11135ab8-79 ovn-installed in OVS
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.556 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.565 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:78:0f 10.100.0.10'], port_security=['fa:16:3e:86:78:0f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c1eac15-4acf-423d-817f-805a374bb405', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '872630f403b24cda8e3ab59acbe33b66', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d8c547c-c22d-49d2-bec0-08b83395a404', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7549cdd9-81a6-48c3-b592-75b552935131, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=11135ab8-7999-42aa-8036-2c6b47a82768) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.567 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 11135ab8-7999-42aa-8036-2c6b47a82768 in datapath d02266ee-be50-465e-a4c8-fe7fe20c6f96 unbound from our chassis
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.569 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d02266ee-be50-465e-a4c8-fe7fe20c6f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.570 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[69b82dca-4472-497a-b3b7-354ab194c931]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.572 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96 namespace which is not needed anymore
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.575 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 27 15:36:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 13.364s CPU time.
Jan 27 15:36:13 compute-0 systemd-machined[156506]: Machine qemu-8-instance-00000008 terminated.
Jan 27 15:36:13 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [NOTICE]   (249780) : haproxy version is 2.8.14-c23fe91
Jan 27 15:36:13 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [NOTICE]   (249780) : path to executable is /usr/sbin/haproxy
Jan 27 15:36:13 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [WARNING]  (249780) : Exiting Master process...
Jan 27 15:36:13 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [ALERT]    (249780) : Current worker (249782) exited with code 143 (Terminated)
Jan 27 15:36:13 compute-0 neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96[249776]: [WARNING]  (249780) : All workers exited. Exiting... (0)
Jan 27 15:36:13 compute-0 systemd[1]: libpod-dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889.scope: Deactivated successfully.
Jan 27 15:36:13 compute-0 podman[249978]: 2026-01-27 15:36:13.7235887 +0000 UTC m=+0.052661540 container died dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:36:13 compute-0 kernel: tap11135ab8-79: entered promiscuous mode
Jan 27 15:36:13 compute-0 NetworkManager[56090]: <info>  [1769528173.7405] manager: (tap11135ab8-79): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.742 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00092|binding|INFO|Claiming lport 11135ab8-7999-42aa-8036-2c6b47a82768 for this chassis.
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00093|binding|INFO|11135ab8-7999-42aa-8036-2c6b47a82768: Claiming fa:16:3e:86:78:0f 10.100.0.10
Jan 27 15:36:13 compute-0 kernel: tap11135ab8-79 (unregistering): left promiscuous mode
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.762 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:78:0f 10.100.0.10'], port_security=['fa:16:3e:86:78:0f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c1eac15-4acf-423d-817f-805a374bb405', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '872630f403b24cda8e3ab59acbe33b66', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d8c547c-c22d-49d2-bec0-08b83395a404', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7549cdd9-81a6-48c3-b592-75b552935131, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=11135ab8-7999-42aa-8036-2c6b47a82768) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.777 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889-userdata-shm.mount: Deactivated successfully.
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00094|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 ovn-installed in OVS
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00095|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 up in Southbound
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00096|binding|INFO|Releasing lport 11135ab8-7999-42aa-8036-2c6b47a82768 from this chassis (sb_readonly=1)
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00097|if_status|INFO|Not setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 down as sb is readonly
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00098|binding|INFO|Releasing lport 11135ab8-7999-42aa-8036-2c6b47a82768 from this chassis (sb_readonly=0)
Jan 27 15:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b044cee2cbdcaeb81072e173bbcb5c7a4a42c30fa38edcaeacd62662b895d394-merged.mount: Deactivated successfully.
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.782 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00099|binding|INFO|Removing iface tap11135ab8-79 ovn-installed in OVS
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.794 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:78:0f 10.100.0.10'], port_security=['fa:16:3e:86:78:0f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c1eac15-4acf-423d-817f-805a374bb405', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '872630f403b24cda8e3ab59acbe33b66', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d8c547c-c22d-49d2-bec0-08b83395a404', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7549cdd9-81a6-48c3-b592-75b552935131, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=11135ab8-7999-42aa-8036-2c6b47a82768) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:13 compute-0 ovn_controller[97541]: 2026-01-27T15:36:13Z|00100|binding|INFO|Setting lport 11135ab8-7999-42aa-8036-2c6b47a82768 down in Southbound
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.797 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.800 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 podman[249978]: 2026-01-27 15:36:13.805740538 +0000 UTC m=+0.134813378 container cleanup dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.811 185195 INFO nova.virt.libvirt.driver [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Instance destroyed successfully.
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.811 185195 DEBUG nova.objects.instance [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lazy-loading 'resources' on Instance uuid 6c1eac15-4acf-423d-817f-805a374bb405 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:13 compute-0 systemd[1]: libpod-conmon-dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889.scope: Deactivated successfully.
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.830 185195 DEBUG nova.virt.libvirt.vif [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:35:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1753998192',display_name='tempest-ServersTestManualDisk-server-1753998192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1753998192',id=8,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO7iZFgTagHGhdMoGRpAvvVWrZ6SDns1JkVmgs4rz+kHZN+1VJdDk1Lqzdx0u3ZQt32yWI9Aa5KzVBclBwn/lqa8OUPqiskz4nKannJLUhdZDRmEZfBSllbq957QrkWsDA==',key_name='tempest-keypair-1572229790',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:36:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='872630f403b24cda8e3ab59acbe33b66',ramdisk_id='',reservation_id='r-gf7s370k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1680033429',owner_user_name='tempest-ServersTestManualDisk-1680033429-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:36:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e4d1728be0c14934b0fb170d90f2cf80',uuid=6c1eac15-4acf-423d-817f-805a374bb405,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.831 185195 DEBUG nova.network.os_vif_util [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converting VIF {"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.831 185195 DEBUG nova.network.os_vif_util [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.832 185195 DEBUG os_vif [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.833 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.833 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11135ab8-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.835 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.836 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.838 185195 INFO os_vif [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:78:0f,bridge_name='br-int',has_traffic_filtering=True,id=11135ab8-7999-42aa-8036-2c6b47a82768,network=Network(d02266ee-be50-465e-a4c8-fe7fe20c6f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11135ab8-79')
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.839 185195 INFO nova.virt.libvirt.driver [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Deleting instance files /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405_del
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.840 185195 INFO nova.virt.libvirt.driver [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Deletion of /var/lib/nova/instances/6c1eac15-4acf-423d-817f-805a374bb405_del complete
Jan 27 15:36:13 compute-0 podman[250017]: 2026-01-27 15:36:13.886521199 +0000 UTC m=+0.056328208 container remove dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.893 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[08915370-18a1-46bf-aee7-80cca457a9ec]: (4, ('Tue Jan 27 03:36:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96 (dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889)\ndc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889\nTue Jan 27 03:36:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96 (dc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889)\ndc136b288a704854a2d601a98e8d321182302455af5b37e2111609da364d8889\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.896 185195 INFO nova.compute.manager [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Took 0.38 seconds to destroy the instance on the hypervisor.
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.897 185195 DEBUG oslo.service.loopingcall [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.897 185195 DEBUG nova.compute.manager [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.897 185195 DEBUG nova.network.neutron [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.897 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0187ce-ecc0-464f-939c-3cb70bd60a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.899 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd02266ee-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:13 compute-0 kernel: tapd02266ee-b0: left promiscuous mode
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.902 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 nova_compute[185191]: 2026-01-27 15:36:13.913 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.917 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ad739300-f654-46f3-a9c3-759dd8552f7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.932 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[970b053f-5e2c-4de2-aeb4-e45a4f132167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.933 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2a36d4-41b3-4e7e-805c-1f507d9e078e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.948 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef9f742-55c8-48cc-8aa0-4dfb8014b8f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573261, 'reachable_time': 22430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250030, 'error': None, 'target': 'ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dd02266ee\x2dbe50\x2d465e\x2da4c8\x2dfe7fe20c6f96.mount: Deactivated successfully.
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.955 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d02266ee-be50-465e-a4c8-fe7fe20c6f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.955 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[802572f6-90a7-4019-935b-3ceff4eb8eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.956 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 11135ab8-7999-42aa-8036-2c6b47a82768 in datapath d02266ee-be50-465e-a4c8-fe7fe20c6f96 unbound from our chassis
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.957 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d02266ee-be50-465e-a4c8-fe7fe20c6f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.958 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec19590-ec83-4b6c-955e-64d85232f88f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.959 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 11135ab8-7999-42aa-8036-2c6b47a82768 in datapath d02266ee-be50-465e-a4c8-fe7fe20c6f96 unbound from our chassis
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.960 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d02266ee-be50-465e-a4c8-fe7fe20c6f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:36:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:13.961 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e5836b0e-1ed5-48d6-9398-5e8bc1bfdede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:14 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:14.385 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.477 185195 DEBUG nova.compute.manager [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-unplugged-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.477 185195 DEBUG oslo_concurrency.lockutils [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.478 185195 DEBUG oslo_concurrency.lockutils [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.478 185195 DEBUG oslo_concurrency.lockutils [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.478 185195 DEBUG nova.compute.manager [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] No waiting events found dispatching network-vif-unplugged-11135ab8-7999-42aa-8036-2c6b47a82768 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.479 185195 DEBUG nova.compute.manager [req-fa022847-2268-42d6-89f1-5ad12feaefc4 req-fff1fe5a-b34d-44cd-a427-ba8ef33e0344 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-unplugged-11135ab8-7999-42aa-8036-2c6b47a82768 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.785 185195 DEBUG nova.network.neutron [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updated VIF entry in instance network info cache for port 11135ab8-7999-42aa-8036-2c6b47a82768. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.786 185195 DEBUG nova.network.neutron [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updating instance_info_cache with network_info: [{"id": "11135ab8-7999-42aa-8036-2c6b47a82768", "address": "fa:16:3e:86:78:0f", "network": {"id": "d02266ee-be50-465e-a4c8-fe7fe20c6f96", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-744390261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "872630f403b24cda8e3ab59acbe33b66", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11135ab8-79", "ovs_interfaceid": "11135ab8-7999-42aa-8036-2c6b47a82768", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:14 compute-0 nova_compute[185191]: 2026-01-27 15:36:14.807 185195 DEBUG oslo_concurrency.lockutils [req-9e38f5c4-d5d3-47b8-a36f-a1c3655aacae req-fbf458be-b8b0-4948-b58d-7246979763cd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-6c1eac15-4acf-423d-817f-805a374bb405" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:15 compute-0 nova_compute[185191]: 2026-01-27 15:36:15.950 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:16 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:16.268 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.728 185195 DEBUG nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.729 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.729 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.729 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.729 185195 DEBUG nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] No waiting events found dispatching network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.730 185195 WARNING nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received unexpected event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 for instance with vm_state active and task_state deleting.
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.730 185195 DEBUG nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.730 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "6c1eac15-4acf-423d-817f-805a374bb405-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.730 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.730 185195 DEBUG oslo_concurrency.lockutils [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.731 185195 DEBUG nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] No waiting events found dispatching network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.731 185195 WARNING nova.compute.manager [req-3ce0a005-be96-4d10-a872-ecf4c63dee34 req-3444043f-ac12-4a60-a146-7cb06af0de3a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received unexpected event network-vif-plugged-11135ab8-7999-42aa-8036-2c6b47a82768 for instance with vm_state active and task_state deleting.
Jan 27 15:36:16 compute-0 nova_compute[185191]: 2026-01-27 15:36:16.837 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.055 185195 DEBUG nova.network.neutron [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.181 185195 INFO nova.compute.manager [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Took 3.28 seconds to deallocate network for instance.
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.185 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528162.181458, eae5a95c-09c0-4c0b-ae8f-3ab2659972b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.185 185195 INFO nova.compute.manager [-] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] VM Stopped (Lifecycle Event)
Jan 27 15:36:17 compute-0 podman[250031]: 2026-01-27 15:36:17.308319775 +0000 UTC m=+0.067792955 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.713 185195 DEBUG nova.compute.manager [None req-2ab331bb-4408-439b-b797-3257e2a2d328 - - - - - -] [instance: eae5a95c-09c0-4c0b-ae8f-3ab2659972b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.723 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.723 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.826 185195 DEBUG nova.compute.provider_tree [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:36:17 compute-0 nova_compute[185191]: 2026-01-27 15:36:17.875 185195 DEBUG nova.scheduler.client.report [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:36:18 compute-0 nova_compute[185191]: 2026-01-27 15:36:18.075 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:18 compute-0 nova_compute[185191]: 2026-01-27 15:36:18.117 185195 INFO nova.scheduler.client.report [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Deleted allocations for instance 6c1eac15-4acf-423d-817f-805a374bb405
Jan 27 15:36:18 compute-0 nova_compute[185191]: 2026-01-27 15:36:18.250 185195 DEBUG oslo_concurrency.lockutils [None req-6e59b34c-eb74-46f3-9fd6-be39f3e2cc92 e4d1728be0c14934b0fb170d90f2cf80 872630f403b24cda8e3ab59acbe33b66 - - default default] Lock "6c1eac15-4acf-423d-817f-805a374bb405" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:18 compute-0 ovn_controller[97541]: 2026-01-27T15:36:18Z|00101|binding|INFO|Releasing lport 09357bac-861f-495f-9fcb-374ff41c059c from this chassis (sb_readonly=0)
Jan 27 15:36:18 compute-0 nova_compute[185191]: 2026-01-27 15:36:18.680 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:18 compute-0 nova_compute[185191]: 2026-01-27 15:36:18.836 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:19 compute-0 nova_compute[185191]: 2026-01-27 15:36:19.176 185195 DEBUG nova.compute.manager [req-f095e67f-079f-4cc9-ad6a-ec1218a63c1b req-39e37c74-a153-426d-bfe0-28798061d86b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Received event network-vif-deleted-11135ab8-7999-42aa-8036-2c6b47a82768 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:20 compute-0 podman[250052]: 2026-01-27 15:36:20.323496342 +0000 UTC m=+0.074816993 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:36:20 compute-0 podman[250051]: 2026-01-27 15:36:20.343596159 +0000 UTC m=+0.094963431 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base 
Image 9., vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container)
Jan 27 15:36:20 compute-0 nova_compute[185191]: 2026-01-27 15:36:20.952 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:23 compute-0 nova_compute[185191]: 2026-01-27 15:36:23.838 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:25 compute-0 podman[250093]: 2026-01-27 15:36:25.312972299 +0000 UTC m=+0.068952506 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:36:25 compute-0 nova_compute[185191]: 2026-01-27 15:36:25.954 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:27 compute-0 ovn_controller[97541]: 2026-01-27T15:36:27Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:55:96 10.100.0.6
Jan 27 15:36:27 compute-0 ovn_controller[97541]: 2026-01-27T15:36:27Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:55:96 10.100.0.6
Jan 27 15:36:28 compute-0 nova_compute[185191]: 2026-01-27 15:36:28.302 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:28 compute-0 nova_compute[185191]: 2026-01-27 15:36:28.809 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528173.8074358, 6c1eac15-4acf-423d-817f-805a374bb405 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:36:28 compute-0 nova_compute[185191]: 2026-01-27 15:36:28.809 185195 INFO nova.compute.manager [-] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] VM Stopped (Lifecycle Event)
Jan 27 15:36:28 compute-0 nova_compute[185191]: 2026-01-27 15:36:28.836 185195 DEBUG nova.compute.manager [None req-3a4044fa-3913-4876-b148-7bc4fdb0def5 - - - - - -] [instance: 6c1eac15-4acf-423d-817f-805a374bb405] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:28 compute-0 nova_compute[185191]: 2026-01-27 15:36:28.841 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:29 compute-0 podman[201073]: time="2026-01-27T15:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:36:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:36:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 27 15:36:30 compute-0 nova_compute[185191]: 2026-01-27 15:36:30.956 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:30 compute-0 ovn_controller[97541]: 2026-01-27T15:36:30Z|00102|binding|INFO|Releasing lport 09357bac-861f-495f-9fcb-374ff41c059c from this chassis (sb_readonly=0)
Jan 27 15:36:31 compute-0 nova_compute[185191]: 2026-01-27 15:36:31.036 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:31 compute-0 openstack_network_exporter[204239]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:36:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:36:31 compute-0 openstack_network_exporter[204239]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:36:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:36:33 compute-0 nova_compute[185191]: 2026-01-27 15:36:33.843 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.250 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:34 compute-0 nova_compute[185191]: 2026-01-27 15:36:34.993 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.111 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.179 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.180 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.244 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.583 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.584 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=72.34925842285156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.584 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.585 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.680 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance b4f95e32-4dde-475f-bf71-8bd9391938a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.680 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.680 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.754 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.774 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.808 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.808 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:35 compute-0 nova_compute[185191]: 2026-01-27 15:36:35.960 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:37 compute-0 podman[250140]: 2026-01-27 15:36:37.306289841 +0000 UTC m=+0.065695718 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 15:36:38 compute-0 nova_compute[185191]: 2026-01-27 15:36:38.846 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:40 compute-0 podman[250157]: 2026-01-27 15:36:40.318552861 +0000 UTC m=+0.071549125 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:36:40 compute-0 podman[250159]: 2026-01-27 15:36:40.328529138 +0000 UTC m=+0.071722230 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 15:36:40 compute-0 podman[250158]: 2026-01-27 15:36:40.379303027 +0000 UTC m=+0.128387556 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:36:40 compute-0 nova_compute[185191]: 2026-01-27 15:36:40.962 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:41 compute-0 nova_compute[185191]: 2026-01-27 15:36:41.546 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.808 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.821 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.821 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.857 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.956 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.957 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.975 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:36:42 compute-0 nova_compute[185191]: 2026-01-27 15:36:42.975 185195 INFO nova.compute.claims [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.169 185195 DEBUG nova.compute.provider_tree [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.192 185195 DEBUG nova.scheduler.client.report [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.225 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.226 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.296 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.296 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.319 185195 INFO nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.347 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.466 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.467 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.468 185195 INFO nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Creating image(s)
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.468 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.469 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.470 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.484 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.557 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.558 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.559 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.570 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.639 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.640 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.723 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk 1073741824" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.725 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.725 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.774 185195 DEBUG nova.policy [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37fdc28d88dc42689e835e91aad4c2d3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '85bd0617549142039dbe55541a8fece5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.796 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.797 185195 DEBUG nova.virt.disk.api [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Checking if we can resize image /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.797 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.848 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.868 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.869 185195 DEBUG nova.virt.disk.api [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Cannot resize image /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.869 185195 DEBUG nova.objects.instance [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'migration_context' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.942 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.943 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Ensure instance console log exists: /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.943 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.944 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:43 compute-0 nova_compute[185191]: 2026-01-27 15:36:43.945 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:44 compute-0 nova_compute[185191]: 2026-01-27 15:36:44.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:45 compute-0 nova_compute[185191]: 2026-01-27 15:36:45.328 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Successfully created port: c4e14112-ad85-4d49-92a0-fa577e5760f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:36:45 compute-0 nova_compute[185191]: 2026-01-27 15:36:45.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:45 compute-0 nova_compute[185191]: 2026-01-27 15:36:45.965 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:46 compute-0 nova_compute[185191]: 2026-01-27 15:36:46.006 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:46 compute-0 nova_compute[185191]: 2026-01-27 15:36:46.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:47 compute-0 sshd-session[250235]: Invalid user sol from 2.57.122.238 port 56440
Jan 27 15:36:47 compute-0 nova_compute[185191]: 2026-01-27 15:36:47.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:47 compute-0 nova_compute[185191]: 2026-01-27 15:36:47.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:36:47 compute-0 sshd-session[250235]: Connection closed by invalid user sol 2.57.122.238 port 56440 [preauth]
Jan 27 15:36:47 compute-0 podman[250237]: 2026-01-27 15:36:47.960090001 +0000 UTC m=+0.108763891 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.418 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Successfully updated port: c4e14112-ad85-4d49-92a0-fa577e5760f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.555 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.555 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.556 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.774 185195 DEBUG nova.compute.manager [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-changed-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.774 185195 DEBUG nova.compute.manager [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Refreshing instance network info cache due to event network-changed-c4e14112-ad85-4d49-92a0-fa577e5760f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.775 185195 DEBUG oslo_concurrency.lockutils [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:48 compute-0 nova_compute[185191]: 2026-01-27 15:36:48.852 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:49 compute-0 nova_compute[185191]: 2026-01-27 15:36:49.478 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:36:49 compute-0 nova_compute[185191]: 2026-01-27 15:36:49.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:49 compute-0 nova_compute[185191]: 2026-01-27 15:36:49.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:36:49 compute-0 nova_compute[185191]: 2026-01-27 15:36:49.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:36:49 compute-0 nova_compute[185191]: 2026-01-27 15:36:49.985 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 15:36:50 compute-0 nova_compute[185191]: 2026-01-27 15:36:50.320 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:50 compute-0 nova_compute[185191]: 2026-01-27 15:36:50.321 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:50 compute-0 nova_compute[185191]: 2026-01-27 15:36:50.321 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:36:50 compute-0 nova_compute[185191]: 2026-01-27 15:36:50.321 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:50 compute-0 nova_compute[185191]: 2026-01-27 15:36:50.968 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:51 compute-0 podman[250257]: 2026-01-27 15:36:51.323547396 +0000 UTC m=+0.066063238 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:36:51 compute-0 podman[250256]: 2026-01-27 15:36:51.329253829 +0000 UTC m=+0.078411909 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=kepler, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64)
Jan 27 15:36:51 compute-0 nova_compute[185191]: 2026-01-27 15:36:51.440 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:51 compute-0 nova_compute[185191]: 2026-01-27 15:36:51.441 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:51 compute-0 nova_compute[185191]: 2026-01-27 15:36:51.719 185195 DEBUG nova.network.neutron [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:51 compute-0 nova_compute[185191]: 2026-01-27 15:36:51.776 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.083 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.084 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance network_info: |[{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.084 185195 DEBUG oslo_concurrency.lockutils [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.085 185195 DEBUG nova.network.neutron [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Refreshing network info cache for port c4e14112-ad85-4d49-92a0-fa577e5760f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.089 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start _get_guest_xml network_info=[{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.097 185195 WARNING nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.104 185195 DEBUG nova.virt.libvirt.host [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.105 185195 DEBUG nova.virt.libvirt.host [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.110 185195 DEBUG nova.virt.libvirt.host [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.111 185195 DEBUG nova.virt.libvirt.host [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.111 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.111 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.112 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.112 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.112 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.113 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.113 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.113 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.113 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.113 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.114 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.114 185195 DEBUG nova.virt.hardware [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.117 185195 DEBUG nova.virt.libvirt.vif [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:36:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.118 185195 DEBUG nova.network.os_vif_util [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.118 185195 DEBUG nova.network.os_vif_util [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.119 185195 DEBUG nova.objects.instance [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.192 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.192 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.201 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.201 185195 INFO nova.compute.claims [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.401 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <uuid>2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</uuid>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <name>instance-00000009</name>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:name>tempest-ServerActionsTestJSON-server-1366686872</nova:name>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:36:52</nova:creationTime>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:user uuid="37fdc28d88dc42689e835e91aad4c2d3">tempest-ServerActionsTestJSON-1260809908-project-member</nova:user>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:project uuid="85bd0617549142039dbe55541a8fece5">tempest-ServerActionsTestJSON-1260809908</nova:project>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         <nova:port uuid="c4e14112-ad85-4d49-92a0-fa577e5760f3">
Jan 27 15:36:52 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <system>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="serial">2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="uuid">2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </system>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <os>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </os>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <features>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </features>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:a4:a8:c7"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <target dev="tapc4e14112-ad"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/console.log" append="off"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <video>
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </video>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:36:52 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:36:52 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:36:52 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:36:52 compute-0 nova_compute[185191]: </domain>
Jan 27 15:36:52 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.401 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Preparing to wait for external event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.401 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.402 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.403 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.403 185195 DEBUG nova.virt.libvirt.vif [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:36:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.404 185195 DEBUG nova.network.os_vif_util [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.405 185195 DEBUG nova.network.os_vif_util [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.405 185195 DEBUG os_vif [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.406 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.406 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.407 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.411 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.411 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4e14112-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.412 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4e14112-ad, col_values=(('external_ids', {'iface-id': 'c4e14112-ad85-4d49-92a0-fa577e5760f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:a8:c7', 'vm-uuid': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.413 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:52 compute-0 NetworkManager[56090]: <info>  [1769528212.4157] manager: (tapc4e14112-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.415 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.425 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.426 185195 INFO os_vif [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad')
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.553 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.554 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.554 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] No VIF found with MAC fa:16:3e:a4:a8:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.556 185195 INFO nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Using config drive
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.582 185195 DEBUG nova.compute.provider_tree [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.610 185195 DEBUG nova.scheduler.client.report [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.649 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.650 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.715 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.716 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.748 185195 INFO nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.779 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.897 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.899 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.899 185195 INFO nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Creating image(s)
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.900 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.900 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.901 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.915 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.979 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.981 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.982 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:52 compute-0 nova_compute[185191]: 2026-01-27 15:36:52.998 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.066 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.067 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.107 185195 DEBUG nova.policy [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6f5a4b77476d45c798bb724369f7305d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddd814ae0f3b44a3ae49ec57d44dab05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.136 185195 INFO nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Creating config drive at /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.144 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_chylds2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.277 185195 DEBUG oslo_concurrency.processutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_chylds2" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.319 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk 1073741824" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.320 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.320 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:53 compute-0 kernel: tapc4e14112-ad: entered promiscuous mode
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.3421] manager: (tapc4e14112-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 27 15:36:53 compute-0 ovn_controller[97541]: 2026-01-27T15:36:53Z|00103|binding|INFO|Claiming lport c4e14112-ad85-4d49-92a0-fa577e5760f3 for this chassis.
Jan 27 15:36:53 compute-0 ovn_controller[97541]: 2026-01-27T15:36:53Z|00104|binding|INFO|c4e14112-ad85-4d49-92a0-fa577e5760f3: Claiming fa:16:3e:a4:a8:c7 10.100.0.9
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.345 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.355 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:a8:c7 10.100.0.9'], port_security=['fa:16:3e:a4:a8:c7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48bde8d1-e906-4909-996e-97d5280dcfb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd0617549142039dbe55541a8fece5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82dd7f40-eb0d-42c8-9980-11f2bbab4495', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdd63418-506a-4397-9a84-8a1d6706b561, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=c4e14112-ad85-4d49-92a0-fa577e5760f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.357 106793 INFO neutron.agent.ovn.metadata.agent [-] Port c4e14112-ad85-4d49-92a0-fa577e5760f3 in datapath 48bde8d1-e906-4909-996e-97d5280dcfb1 bound to our chassis
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.359 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:36:53 compute-0 ovn_controller[97541]: 2026-01-27T15:36:53Z|00105|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 ovn-installed in OVS
Jan 27 15:36:53 compute-0 ovn_controller[97541]: 2026-01-27T15:36:53Z|00106|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 up in Southbound
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.365 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.368 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.375 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d2335c80-cf1d-479e-860f-5b0795c7be40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.376 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48bde8d1-e1 in ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.378 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48bde8d1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.378 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d39108b4-7baa-4f27-ace6-042d1a538c73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.379 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[36ec6329-4e19-4082-beab-55aef28b9ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 systemd-udevd[250329]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:36:53 compute-0 systemd-machined[156506]: New machine qemu-9-instance-00000009.
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.3983] device (tapc4e14112-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.396 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.397 185195 DEBUG nova.virt.disk.api [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Checking if we can resize image /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.397 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.4033] device (tapc4e14112-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.403 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[3d933aee-aa1a-45a3-9417-1778f14faffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.429 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad37479-c173-40de-9f5c-655c9a6258e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.468 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.469 185195 DEBUG nova.virt.disk.api [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Cannot resize image /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.470 185195 DEBUG nova.objects.instance [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lazy-loading 'migration_context' on Instance uuid 93810d43-1793-46e8-871a-11bf4aa9a642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.472 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[aabe5df3-606d-45c9-bfc1-6353840c2ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.488 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[28ebc679-c226-44e0-8fe2-dfcee66ed517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.4904] manager: (tap48bde8d1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Jan 27 15:36:53 compute-0 systemd-udevd[250336]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.503 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.504 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Ensure instance console log exists: /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.504 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.505 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.505 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.528 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[4424f736-6a0c-461d-a4c8-7b0c996b79ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.532 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[3143722b-cccf-404b-9742-6d7e3486c260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.5625] device (tap48bde8d1-e0): carrier: link connected
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.569 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[eecabbde-4a48-4eb6-aec4-74865f472e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.588 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2078a654-c62c-4e23-acdb-cf02f201da80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48bde8d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:d0:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578928, 'reachable_time': 21118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250369, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.606 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[08ad2498-b047-4edc-a0bf-a119366a13ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:d055'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578928, 'tstamp': 578928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250370, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.625 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaf661d-ab3b-4f94-896f-45fcd8a69082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48bde8d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:d0:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578928, 'reachable_time': 21118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250371, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.664 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[649394d5-08e2-4360-bfbe-78312ff59c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.728 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a894aa1a-1010-4a06-b1f3-abf5d835929b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.730 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48bde8d1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.730 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.731 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48bde8d1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:53 compute-0 kernel: tap48bde8d1-e0: entered promiscuous mode
Jan 27 15:36:53 compute-0 NetworkManager[56090]: <info>  [1769528213.7349] manager: (tap48bde8d1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.734 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.741 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.741 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48bde8d1-e0, col_values=(('external_ids', {'iface-id': '6ae5c324-742b-43ab-97cd-a7094add5cfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.745 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:36:53 compute-0 ovn_controller[97541]: 2026-01-27T15:36:53Z|00107|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.746 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5975524f-c71e-4844-834c-05e5ac20c9eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.746 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID 48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:36:53 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:36:53.747 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'env', 'PROCESS_TAG=haproxy-48bde8d1-e906-4909-996e-97d5280dcfb1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48bde8d1-e906-4909-996e-97d5280dcfb1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:36:53 compute-0 nova_compute[185191]: 2026-01-27 15:36:53.759 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.035 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Successfully created port: 56e8ff16-7b0a-4291-b4db-48805117e7f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:36:54 compute-0 podman[250401]: 2026-01-27 15:36:54.113246903 +0000 UTC m=+0.030813917 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:36:54 compute-0 podman[250401]: 2026-01-27 15:36:54.22499513 +0000 UTC m=+0.142562134 container create dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 15:36:54 compute-0 systemd[1]: Started libpod-conmon-dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74.scope.
Jan 27 15:36:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d8b295d5c0224da8e38e43036a1a8fb18b214a097fd36a8018539e55069a71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:36:54 compute-0 podman[250401]: 2026-01-27 15:36:54.416590958 +0000 UTC m=+0.334157972 container init dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:36:54 compute-0 podman[250401]: 2026-01-27 15:36:54.424215302 +0000 UTC m=+0.341782296 container start dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 15:36:54 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [NOTICE]   (250419) : New worker (250421) forked
Jan 27 15:36:54 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [NOTICE]   (250419) : Loading success.
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.721 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528214.7213614, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.722 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Started (Lifecycle Event)
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.749 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.756 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528214.7215004, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.756 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Paused (Lifecycle Event)
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.785 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.791 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:36:54 compute-0 nova_compute[185191]: 2026-01-27 15:36:54.821 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.095 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.116 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.117 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.117 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.414 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Successfully updated port: 56e8ff16-7b0a-4291-b4db-48805117e7f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.437 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.437 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquired lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.437 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.670 185195 DEBUG nova.compute.manager [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Received event network-changed-56e8ff16-7b0a-4291-b4db-48805117e7f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.670 185195 DEBUG nova.compute.manager [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Refreshing instance network info cache due to event network-changed-56e8ff16-7b0a-4291-b4db-48805117e7f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.670 185195 DEBUG oslo_concurrency.lockutils [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.726 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:36:55 compute-0 nova_compute[185191]: 2026-01-27 15:36:55.971 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:56 compute-0 podman[250437]: 2026-01-27 15:36:56.303978732 +0000 UTC m=+0.059462006 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:36:56 compute-0 nova_compute[185191]: 2026-01-27 15:36:56.610 185195 DEBUG nova.network.neutron [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updated VIF entry in instance network info cache for port c4e14112-ad85-4d49-92a0-fa577e5760f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:36:56 compute-0 nova_compute[185191]: 2026-01-27 15:36:56.610 185195 DEBUG nova.network.neutron [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:56 compute-0 nova_compute[185191]: 2026-01-27 15:36:56.970 185195 DEBUG oslo_concurrency.lockutils [req-a417b11e-88ab-4a8c-b584-1adb3d2a7b62 req-f55498c1-39c9-4e26-ab5a-1e8bdd2d0212 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:57 compute-0 nova_compute[185191]: 2026-01-27 15:36:57.415 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:58 compute-0 nova_compute[185191]: 2026-01-27 15:36:58.907 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.644 185195 DEBUG nova.network.neutron [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Updating instance_info_cache with network_info: [{"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.720 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Releasing lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.720 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Instance network_info: |[{"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.721 185195 DEBUG oslo_concurrency.lockutils [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.721 185195 DEBUG nova.network.neutron [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Refreshing network info cache for port 56e8ff16-7b0a-4291-b4db-48805117e7f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.724 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Start _get_guest_xml network_info=[{"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.731 185195 WARNING nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.737 185195 DEBUG nova.virt.libvirt.host [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.738 185195 DEBUG nova.virt.libvirt.host [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.743 185195 DEBUG nova.virt.libvirt.host [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.744 185195 DEBUG nova.virt.libvirt.host [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.744 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.744 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.745 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.745 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.746 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.746 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.746 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.747 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.747 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.747 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.748 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.748 185195 DEBUG nova.virt.hardware [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:36:59 compute-0 podman[201073]: time="2026-01-27T15:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.754 185195 DEBUG nova.virt.libvirt.vif [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-26538732',display_name='tempest-ServerAddressesTestJSON-server-26538732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-26538732',id=10,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd814ae0f3b44a3ae49ec57d44dab05',ramdisk_id='',reservation_id='r-6qtfzhqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2124421816',owner_user_name='tempest-ServerAddressesTestJSON-2124421816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:36:52Z,user_data=None,user_id='6f5a4b77476d45c798bb724369f7305d',uuid=93810d43-1793-46e8-871a-11bf4aa9a642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.755 185195 DEBUG nova.network.os_vif_util [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converting VIF {"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.755 185195 DEBUG nova.network.os_vif_util [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.759 185195 DEBUG nova.objects.instance [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93810d43-1793-46e8-871a-11bf4aa9a642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:36:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4850 "" "Go-http-client/1.1"
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.802 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <uuid>93810d43-1793-46e8-871a-11bf4aa9a642</uuid>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <name>instance-0000000a</name>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:name>tempest-ServerAddressesTestJSON-server-26538732</nova:name>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:36:59</nova:creationTime>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:user uuid="6f5a4b77476d45c798bb724369f7305d">tempest-ServerAddressesTestJSON-2124421816-project-member</nova:user>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:project uuid="ddd814ae0f3b44a3ae49ec57d44dab05">tempest-ServerAddressesTestJSON-2124421816</nova:project>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         <nova:port uuid="56e8ff16-7b0a-4291-b4db-48805117e7f7">
Jan 27 15:36:59 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <system>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="serial">93810d43-1793-46e8-871a-11bf4aa9a642</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="uuid">93810d43-1793-46e8-871a-11bf4aa9a642</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </system>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <os>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </os>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <features>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </features>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.config"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:bd:1e:c1"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <target dev="tap56e8ff16-7b"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/console.log" append="off"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <video>
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </video>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:36:59 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:36:59 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:36:59 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:36:59 compute-0 nova_compute[185191]: </domain>
Jan 27 15:36:59 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.804 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Preparing to wait for external event network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.804 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.805 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.805 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.806 185195 DEBUG nova.virt.libvirt.vif [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-26538732',display_name='tempest-ServerAddressesTestJSON-server-26538732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-26538732',id=10,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd814ae0f3b44a3ae49ec57d44dab05',ramdisk_id='',reservation_id='r-6qtfzhqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2124421816',owner_user_name='tempest-ServerAddressesTestJSON-2124421816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:36:52Z,user_data=None,user_id='6f5a4b77476d45c798bb724369f7305d',uuid=93810d43-1793-46e8-871a-11bf4aa9a642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.806 185195 DEBUG nova.network.os_vif_util [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converting VIF {"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.807 185195 DEBUG nova.network.os_vif_util [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.808 185195 DEBUG os_vif [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.809 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.809 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.810 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.814 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.814 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56e8ff16-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.815 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56e8ff16-7b, col_values=(('external_ids', {'iface-id': '56e8ff16-7b0a-4291-b4db-48805117e7f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:1e:c1', 'vm-uuid': '93810d43-1793-46e8-871a-11bf4aa9a642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:36:59 compute-0 NetworkManager[56090]: <info>  [1769528219.8184] manager: (tap56e8ff16-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.819 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.826 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.828 185195 INFO os_vif [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b')
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.982 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.983 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.983 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] No VIF found with MAC fa:16:3e:bd:1e:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:36:59 compute-0 nova_compute[185191]: 2026-01-27 15:36:59.984 185195 INFO nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Using config drive
Jan 27 15:37:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:00.257 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:00.258 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:00.259 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:00 compute-0 nova_compute[185191]: 2026-01-27 15:37:00.973 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:01 compute-0 openstack_network_exporter[204239]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:37:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:37:01 compute-0 openstack_network_exporter[204239]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:37:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.134 185195 INFO nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Creating config drive at /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.config
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.141 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvsb5wmeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.267 185195 DEBUG oslo_concurrency.processutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvsb5wmeg" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:37:02 compute-0 kernel: tap56e8ff16-7b: entered promiscuous mode
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.3272] manager: (tap56e8ff16-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 27 15:37:02 compute-0 ovn_controller[97541]: 2026-01-27T15:37:02Z|00108|binding|INFO|Claiming lport 56e8ff16-7b0a-4291-b4db-48805117e7f7 for this chassis.
Jan 27 15:37:02 compute-0 ovn_controller[97541]: 2026-01-27T15:37:02Z|00109|binding|INFO|56e8ff16-7b0a-4291-b4db-48805117e7f7: Claiming fa:16:3e:bd:1e:c1 10.100.0.5
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.329 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 ovn_controller[97541]: 2026-01-27T15:37:02Z|00110|binding|INFO|Setting lport 56e8ff16-7b0a-4291-b4db-48805117e7f7 ovn-installed in OVS
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.351 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 systemd-machined[156506]: New machine qemu-10-instance-0000000a.
Jan 27 15:37:02 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.377 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:1e:c1 10.100.0.5'], port_security=['fa:16:3e:bd:1e:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '93810d43-1793-46e8-871a-11bf4aa9a642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-635501f6-2859-48d6-9c69-27e7a358ab64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd814ae0f3b44a3ae49ec57d44dab05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5cd50542-3e19-4b42-92d9-02a5d4992532', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d92f8e9b-8bbb-43b3-8fe3-49de0322b2de, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=56e8ff16-7b0a-4291-b4db-48805117e7f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:37:02 compute-0 ovn_controller[97541]: 2026-01-27T15:37:02Z|00111|binding|INFO|Setting lport 56e8ff16-7b0a-4291-b4db-48805117e7f7 up in Southbound
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.379 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8ff16-7b0a-4291-b4db-48805117e7f7 in datapath 635501f6-2859-48d6-9c69-27e7a358ab64 bound to our chassis
Jan 27 15:37:02 compute-0 systemd-udevd[250484]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.382 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 635501f6-2859-48d6-9c69-27e7a358ab64
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.3951] device (tap56e8ff16-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.394 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e8b9d1-197b-41b3-a4fb-57ad2b3ebbe4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.395 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap635501f6-21 in ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.3961] device (tap56e8ff16-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.397 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap635501f6-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.397 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[b30ce244-ba5e-4c0b-b744-42939fa41054]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.399 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8a235a4a-69b1-4454-9b87-7aa9b1bb720c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.410 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[4d915a6a-08d8-4583-bbda-0b4844da2b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.424 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[25a94739-40f9-4fdb-92f5-70fc995df4a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.452 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[c08e17ad-b80a-4002-8c12-e0249972856b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.459 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2cf965-f629-4a94-8afc-511d22ab4e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.4610] manager: (tap635501f6-20): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.490 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[203942a7-eb8b-4395-aec7-ece19a548752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.495 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[581d0f75-c939-482e-9c18-1f6953d9cc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.5203] device (tap635501f6-20): carrier: link connected
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.529 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3295da-cd36-4862-bfd6-3c60cfd56e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.547 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[95f2ec4c-7b65-43b3-8b9b-7bb8732072b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap635501f6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:5f:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579824, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250517, 'error': None, 'target': 'ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.562 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fc068837-d99b-4a14-a294-dae80dd11507]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:5f58'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579824, 'tstamp': 579824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250518, 'error': None, 'target': 'ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.579 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[eef451b7-26f2-4b54-a2ea-2097372f9e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap635501f6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:5f:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579824, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250519, 'error': None, 'target': 'ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.612 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c73203-15e3-489c-a04d-004d80b75cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 sshd-session[250474]: Invalid user ubuntu from 45.148.10.240 port 36642
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.676 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[17e6c1c9-c770-4481-ba94-de7a81ca2360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.678 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap635501f6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.678 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.679 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap635501f6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:02 compute-0 NetworkManager[56090]: <info>  [1769528222.6822] manager: (tap635501f6-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.681 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.684 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 kernel: tap635501f6-20: entered promiscuous mode
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.685 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap635501f6-20, col_values=(('external_ids', {'iface-id': '65f91ece-433a-4f35-a402-452ed4c65b96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.686 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 ovn_controller[97541]: 2026-01-27T15:37:02Z|00112|binding|INFO|Releasing lport 65f91ece-433a-4f35-a402-452ed4c65b96 from this chassis (sb_readonly=0)
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.701 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.701 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/635501f6-2859-48d6-9c69-27e7a358ab64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/635501f6-2859-48d6-9c69-27e7a358ab64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.703 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[67fd3d32-ea55-4208-a968-ff5ff726c655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.704 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-635501f6-2859-48d6-9c69-27e7a358ab64
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/635501f6-2859-48d6-9c69-27e7a358ab64.pid.haproxy
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID 635501f6-2859-48d6-9c69-27e7a358ab64
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:37:02 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:02.704 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64', 'env', 'PROCESS_TAG=haproxy-635501f6-2859-48d6-9c69-27e7a358ab64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/635501f6-2859-48d6-9c69-27e7a358ab64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:37:02 compute-0 sshd-session[250474]: Connection closed by invalid user ubuntu 45.148.10.240 port 36642 [preauth]
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.790 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528222.7902753, 93810d43-1793-46e8-871a-11bf4aa9a642 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.790 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] VM Started (Lifecycle Event)
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.857 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.863 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528222.7903924, 93810d43-1793-46e8-871a-11bf4aa9a642 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.863 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] VM Paused (Lifecycle Event)
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.894 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.899 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:37:02 compute-0 nova_compute[185191]: 2026-01-27 15:37:02.925 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:37:03 compute-0 podman[250557]: 2026-01-27 15:37:03.072259347 +0000 UTC m=+0.028031123 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:37:03 compute-0 podman[250557]: 2026-01-27 15:37:03.166885214 +0000 UTC m=+0.122656960 container create af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 27 15:37:03 compute-0 systemd[1]: Started libpod-conmon-af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3.scope.
Jan 27 15:37:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23ca593a71b8d453a4e7485017ea9118f0633d3f14602e11da3c5dbea53295/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:37:03 compute-0 podman[250557]: 2026-01-27 15:37:03.320103243 +0000 UTC m=+0.275875039 container init af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 15:37:03 compute-0 podman[250557]: 2026-01-27 15:37:03.32820461 +0000 UTC m=+0.283976366 container start af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:37:03 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [NOTICE]   (250576) : New worker (250578) forked
Jan 27 15:37:03 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [NOTICE]   (250576) : Loading success.
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.826 185195 DEBUG nova.compute.manager [req-56567c8d-94e4-4eea-85ef-ee779a606d20 req-5a0cf403-365d-493c-bfea-97e328d2f460 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.828 185195 DEBUG oslo_concurrency.lockutils [req-56567c8d-94e4-4eea-85ef-ee779a606d20 req-5a0cf403-365d-493c-bfea-97e328d2f460 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.829 185195 DEBUG oslo_concurrency.lockutils [req-56567c8d-94e4-4eea-85ef-ee779a606d20 req-5a0cf403-365d-493c-bfea-97e328d2f460 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.829 185195 DEBUG oslo_concurrency.lockutils [req-56567c8d-94e4-4eea-85ef-ee779a606d20 req-5a0cf403-365d-493c-bfea-97e328d2f460 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.829 185195 DEBUG nova.compute.manager [req-56567c8d-94e4-4eea-85ef-ee779a606d20 req-5a0cf403-365d-493c-bfea-97e328d2f460 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Processing event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.831 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.836 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528223.8365867, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.837 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Resumed (Lifecycle Event)
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.840 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.846 185195 INFO nova.virt.libvirt.driver [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance spawned successfully.
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.847 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.881 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.893 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.900 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.901 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.902 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.903 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.904 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.906 185195 DEBUG nova.virt.libvirt.driver [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.951 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.995 185195 INFO nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Took 20.53 seconds to spawn the instance on the hypervisor.
Jan 27 15:37:03 compute-0 nova_compute[185191]: 2026-01-27 15:37:03.996 185195 DEBUG nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.098 185195 DEBUG nova.network.neutron [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Updated VIF entry in instance network info cache for port 56e8ff16-7b0a-4291-b4db-48805117e7f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.099 185195 DEBUG nova.network.neutron [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Updating instance_info_cache with network_info: [{"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.106 185195 INFO nova.compute.manager [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Took 21.18 seconds to build instance.
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.126 185195 DEBUG oslo_concurrency.lockutils [req-62c04e45-752c-4edf-a0ac-02da3f2980a4 req-3d77b836-3dd0-41bd-8be6-6b74f127c633 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-93810d43-1793-46e8-871a-11bf4aa9a642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.128 185195 DEBUG oslo_concurrency.lockutils [None req-34be022a-4ccb-4b7a-9d65-96b62ad10db7 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.780 185195 DEBUG nova.objects.instance [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lazy-loading 'flavor' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.819 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.830 185195 DEBUG oslo_concurrency.lockutils [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:04 compute-0 nova_compute[185191]: 2026-01-27 15:37:04.831 185195 DEBUG oslo_concurrency.lockutils [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:05 compute-0 nova_compute[185191]: 2026-01-27 15:37:05.978 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.284 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.285 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.285 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.286 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.286 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.286 185195 WARNING nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state None.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.286 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Received event network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.287 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.287 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.287 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.288 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Processing event network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.288 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Received event network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.288 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.289 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.289 185195 DEBUG oslo_concurrency.lockutils [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.289 185195 DEBUG nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] No waiting events found dispatching network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.289 185195 WARNING nova.compute.manager [req-81f424a7-01ed-4684-b574-1c7a65b1f7bd req-486f7f41-5968-4fe1-a639-38e14e4131be 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Received unexpected event network-vif-plugged-56e8ff16-7b0a-4291-b4db-48805117e7f7 for instance with vm_state building and task_state spawning.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.290 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.295 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528226.2949817, 93810d43-1793-46e8-871a-11bf4aa9a642 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.295 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] VM Resumed (Lifecycle Event)
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.297 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.302 185195 INFO nova.virt.libvirt.driver [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Instance spawned successfully.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.302 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.327 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.334 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.354 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.355 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.356 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.357 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.357 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.358 185195 DEBUG nova.virt.libvirt.driver [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.370 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.459 185195 INFO nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Took 13.56 seconds to spawn the instance on the hypervisor.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.459 185195 DEBUG nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.534 185195 INFO nova.compute.manager [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Took 14.37 seconds to build instance.
Jan 27 15:37:06 compute-0 nova_compute[185191]: 2026-01-27 15:37:06.571 185195 DEBUG oslo_concurrency.lockutils [None req-7bcf9697-de11-4b2e-8614-637fdec4924c 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:07 compute-0 nova_compute[185191]: 2026-01-27 15:37:07.901 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:07 compute-0 nova_compute[185191]: 2026-01-27 15:37:07.990 185195 DEBUG nova.network.neutron [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:37:08 compute-0 nova_compute[185191]: 2026-01-27 15:37:08.193 185195 DEBUG nova.compute.manager [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:08 compute-0 nova_compute[185191]: 2026-01-27 15:37:08.194 185195 DEBUG nova.compute.manager [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing instance network info cache due to event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:37:08 compute-0 nova_compute[185191]: 2026-01-27 15:37:08.194 185195 DEBUG oslo_concurrency.lockutils [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:08 compute-0 podman[250588]: 2026-01-27 15:37:08.304467508 +0000 UTC m=+0.063386310 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:37:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:08.493 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:37:08 compute-0 nova_compute[185191]: 2026-01-27 15:37:08.493 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:08.494 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:37:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:08.495 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.598 185195 DEBUG nova.compute.manager [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-changed-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.599 185195 DEBUG nova.compute.manager [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Refreshing instance network info cache due to event network-changed-c4e14112-ad85-4d49-92a0-fa577e5760f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.599 185195 DEBUG oslo_concurrency.lockutils [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.599 185195 DEBUG oslo_concurrency.lockutils [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.600 185195 DEBUG nova.network.neutron [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Refreshing network info cache for port c4e14112-ad85-4d49-92a0-fa577e5760f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:37:09 compute-0 nova_compute[185191]: 2026-01-27 15:37:09.822 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.060 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.061 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.061 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.062 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.062 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.063 185195 INFO nova.compute.manager [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Terminating instance
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.064 185195 DEBUG nova.compute.manager [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:37:10 compute-0 kernel: tap56e8ff16-7b (unregistering): left promiscuous mode
Jan 27 15:37:10 compute-0 NetworkManager[56090]: <info>  [1769528230.1040] device (tap56e8ff16-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:37:10 compute-0 ovn_controller[97541]: 2026-01-27T15:37:10Z|00113|binding|INFO|Releasing lport 56e8ff16-7b0a-4291-b4db-48805117e7f7 from this chassis (sb_readonly=0)
Jan 27 15:37:10 compute-0 ovn_controller[97541]: 2026-01-27T15:37:10Z|00114|binding|INFO|Setting lport 56e8ff16-7b0a-4291-b4db-48805117e7f7 down in Southbound
Jan 27 15:37:10 compute-0 ovn_controller[97541]: 2026-01-27T15:37:10Z|00115|binding|INFO|Removing iface tap56e8ff16-7b ovn-installed in OVS
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.116 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:10.126 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:1e:c1 10.100.0.5'], port_security=['fa:16:3e:bd:1e:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '93810d43-1793-46e8-871a-11bf4aa9a642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-635501f6-2859-48d6-9c69-27e7a358ab64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd814ae0f3b44a3ae49ec57d44dab05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5cd50542-3e19-4b42-92d9-02a5d4992532', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d92f8e9b-8bbb-43b3-8fe3-49de0322b2de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=56e8ff16-7b0a-4291-b4db-48805117e7f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:37:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:10.127 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8ff16-7b0a-4291-b4db-48805117e7f7 in datapath 635501f6-2859-48d6-9c69-27e7a358ab64 unbound from our chassis
Jan 27 15:37:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:10.130 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 635501f6-2859-48d6-9c69-27e7a358ab64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.130 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:10.131 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e892645f-c36d-4d22-a6dc-9b33f01ad87a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:10.132 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64 namespace which is not needed anymore
Jan 27 15:37:10 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 27 15:37:10 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 4.279s CPU time.
Jan 27 15:37:10 compute-0 systemd-machined[156506]: Machine qemu-10-instance-0000000a terminated.
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [NOTICE]   (250576) : haproxy version is 2.8.14-c23fe91
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [NOTICE]   (250576) : path to executable is /usr/sbin/haproxy
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [WARNING]  (250576) : Exiting Master process...
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [WARNING]  (250576) : Exiting Master process...
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [ALERT]    (250576) : Current worker (250578) exited with code 143 (Terminated)
Jan 27 15:37:10 compute-0 neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64[250572]: [WARNING]  (250576) : All workers exited. Exiting... (0)
Jan 27 15:37:10 compute-0 systemd[1]: libpod-af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3.scope: Deactivated successfully.
Jan 27 15:37:10 compute-0 podman[250628]: 2026-01-27 15:37:10.306366103 +0000 UTC m=+0.074065528 container died af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.327 185195 INFO nova.virt.libvirt.driver [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Instance destroyed successfully.
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.328 185195 DEBUG nova.objects.instance [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lazy-loading 'resources' on Instance uuid 93810d43-1793-46e8-871a-11bf4aa9a642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.390 185195 DEBUG nova.virt.libvirt.vif [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:36:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-26538732',display_name='tempest-ServerAddressesTestJSON-server-26538732',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-26538732',id=10,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:37:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddd814ae0f3b44a3ae49ec57d44dab05',ramdisk_id='',reservation_id='r-6qtfzhqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-2124421816',owner_user_name='tempest-ServerAddressesTestJSON-2124421816-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:37:06Z,user_data=None,user_id='6f5a4b77476d45c798bb724369f7305d',uuid=93810d43-1793-46e8-871a-11bf4aa9a642,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.390 185195 DEBUG nova.network.os_vif_util [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converting VIF {"id": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "address": "fa:16:3e:bd:1e:c1", "network": {"id": "635501f6-2859-48d6-9c69-27e7a358ab64", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1910978605-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddd814ae0f3b44a3ae49ec57d44dab05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8ff16-7b", "ovs_interfaceid": "56e8ff16-7b0a-4291-b4db-48805117e7f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.391 185195 DEBUG nova.network.os_vif_util [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.392 185195 DEBUG os_vif [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.394 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.394 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56e8ff16-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.397 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.400 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.402 185195 INFO os_vif [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:1e:c1,bridge_name='br-int',has_traffic_filtering=True,id=56e8ff16-7b0a-4291-b4db-48805117e7f7,network=Network(635501f6-2859-48d6-9c69-27e7a358ab64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8ff16-7b')
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.403 185195 INFO nova.virt.libvirt.driver [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Deleting instance files /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642_del
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.404 185195 INFO nova.virt.libvirt.driver [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Deletion of /var/lib/nova/instances/93810d43-1793-46e8-871a-11bf4aa9a642_del complete
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.462 185195 INFO nova.compute.manager [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.463 185195 DEBUG oslo.service.loopingcall [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.463 185195 DEBUG nova.compute.manager [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.463 185195 DEBUG nova.network.neutron [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3-userdata-shm.mount: Deactivated successfully.
Jan 27 15:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd23ca593a71b8d453a4e7485017ea9118f0633d3f14602e11da3c5dbea53295-merged.mount: Deactivated successfully.
Jan 27 15:37:10 compute-0 podman[250671]: 2026-01-27 15:37:10.549489212 +0000 UTC m=+0.079622866 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Jan 27 15:37:10 compute-0 podman[250669]: 2026-01-27 15:37:10.567345511 +0000 UTC m=+0.105416878 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:37:10 compute-0 podman[250670]: 2026-01-27 15:37:10.598391174 +0000 UTC m=+0.132424832 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:37:10 compute-0 podman[250628]: 2026-01-27 15:37:10.755054335 +0000 UTC m=+0.522753760 container cleanup af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:37:10 compute-0 systemd[1]: libpod-conmon-af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3.scope: Deactivated successfully.
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.806 185195 DEBUG nova.network.neutron [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.847 185195 DEBUG oslo_concurrency.lockutils [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.847 185195 DEBUG nova.compute.manager [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.848 185195 DEBUG nova.compute.manager [None req-b69f740c-cc33-4358-ad7a-092406727446 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] network_info to inject: |[{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.851 185195 DEBUG oslo_concurrency.lockutils [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.851 185195 DEBUG nova.network.neutron [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:37:10 compute-0 nova_compute[185191]: 2026-01-27 15:37:10.979 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.991 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.992 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.997 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:37:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:10.998 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:37:11 compute-0 podman[250736]: 2026-01-27 15:37:11.059746896 +0000 UTC m=+0.274865032 container remove af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.066 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fc447e4f-745f-4545-96b5-a675ecc097f9]: (4, ('Tue Jan 27 03:37:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64 (af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3)\naf6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3\nTue Jan 27 03:37:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64 (af6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3)\naf6b9b32a28f221416ed42565c023318e5355d9302c0f63740124f579f0f45c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.069 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5e671b-bc1d-4976-832d-ce88baa4a717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.070 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap635501f6-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:11 compute-0 nova_compute[185191]: 2026-01-27 15:37:11.072 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:11 compute-0 kernel: tap635501f6-20: left promiscuous mode
Jan 27 15:37:11 compute-0 nova_compute[185191]: 2026-01-27 15:37:11.089 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.093 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f86319fc-9367-4840-a9ea-2993b80e320f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.122 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2c7a08-2eae-48ed-8588-9a944aa7869b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.124 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6e3686-b74e-4e7b-a0f0-9a417d81dd6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.141 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1f5705-4c65-4d7a-9963-e7ea69e8ae77]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579817, 'reachable_time': 40786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250751, 'error': None, 'target': 'ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.143 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-635501f6-2859-48d6-9c69-27e7a358ab64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:37:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:11.144 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[b7aa2322-7b1a-45de-90fd-703e526a8d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d635501f6\x2d2859\x2d48d6\x2d9c69\x2d27e7a358ab64.mount: Deactivated successfully.
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.532 185195 DEBUG nova.network.neutron [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updated VIF entry in instance network info cache for port c4e14112-ad85-4d49-92a0-fa577e5760f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.532 185195 DEBUG nova.network.neutron [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.574 185195 DEBUG oslo_concurrency.lockutils [req-9fdddc8b-e1d2-48ff-9da9-c499c3f42650 req-50d4c60f-2816-42d7-a87a-ddd28921d873 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.604 185195 DEBUG nova.network.neutron [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.633 185195 INFO nova.compute.manager [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Took 2.17 seconds to deallocate network for instance.
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.698 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.699 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.795 185195 DEBUG nova.compute.provider_tree [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:37:12 compute-0 nova_compute[185191]: 2026-01-27 15:37:12.830 185195 DEBUG nova.scheduler.client.report [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.193 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1858 Content-Type: application/json Date: Tue, 27 Jan 2026 15:37:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-78e81919-24bf-4051-a5ca-52dfc1ce0bec x-openstack-request-id: req-78e81919-24bf-4051-a5ca-52dfc1ce0bec _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.193 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a", "name": "tempest-ServerActionsTestJSON-server-1366686872", "status": "ACTIVE", "tenant_id": "85bd0617549142039dbe55541a8fece5", "user_id": "37fdc28d88dc42689e835e91aad4c2d3", "metadata": {}, "hostId": "1f7300eb44e10075c6cc0cb140aad0f7d6c6c299bdbe0bd07ddb3879", "image": {"id": "fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:36:40Z", "updated": "2026-01-27T15:37:04Z", "addresses": {"tempest-ServerActionsTestJSON-232594074-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a4:a8:c7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1606409313", "OS-SRV-USG:launched_at": "2026-01-27T15:37:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1458060146"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} 
_http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.194 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a used request id req-78e81919-24bf-4051-a5ca-52dfc1ce0bec request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.195 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'name': 'tempest-ServerActionsTestJSON-server-1366686872', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '85bd0617549142039dbe55541a8fece5', 'user_id': '37fdc28d88dc42689e835e91aad4c2d3', 'hostId': '1f7300eb44e10075c6cc0cb140aad0f7d6c6c299bdbe0bd07ddb3879', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.198 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b4f95e32-4dde-475f-bf71-8bd9391938a2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:37:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:13.199 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b4f95e32-4dde-475f-bf71-8bd9391938a2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:37:13 compute-0 nova_compute[185191]: 2026-01-27 15:37:13.213 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:13 compute-0 nova_compute[185191]: 2026-01-27 15:37:13.334 185195 INFO nova.scheduler.client.report [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Deleted allocations for instance 93810d43-1793-46e8-871a-11bf4aa9a642
Jan 27 15:37:13 compute-0 nova_compute[185191]: 2026-01-27 15:37:13.649 185195 DEBUG nova.compute.manager [req-d14c5267-bb45-48c4-9b29-6c61c2fe3848 req-c9317209-e56b-49b7-93ad-dcd505cd129f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Received event network-vif-deleted-56e8ff16-7b0a-4291-b4db-48805117e7f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:13 compute-0 nova_compute[185191]: 2026-01-27 15:37:13.843 185195 DEBUG oslo_concurrency.lockutils [None req-a04ed2d1-2bbc-467e-a1cb-a39b92ea80dc 6f5a4b77476d45c798bb724369f7305d ddd814ae0f3b44a3ae49ec57d44dab05 - - default default] Lock "93810d43-1793-46e8-871a-11bf4aa9a642" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:13 compute-0 nova_compute[185191]: 2026-01-27 15:37:13.902 185195 DEBUG nova.objects.instance [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lazy-loading 'flavor' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:37:14 compute-0 nova_compute[185191]: 2026-01-27 15:37:14.022 185195 DEBUG oslo_concurrency.lockutils [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.079 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2108 Content-Type: application/json Date: Tue, 27 Jan 2026 15:37:13 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d12c5ac9-80dc-4ede-b737-685cc0c36961 x-openstack-request-id: req-d12c5ac9-80dc-4ede-b737-685cc0c36961 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.080 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b4f95e32-4dde-475f-bf71-8bd9391938a2", "name": "tempest-AttachInterfacesUnderV243Test-server-296491480", "status": "ACTIVE", "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "user_id": "284e9a7227b6494189d43d1f5c7f629f", "metadata": {}, "hostId": "66c8803a5331ae428b2d2c271ae3623e2da9dfef6eb886f3c97a0b02", "image": {"id": "fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:35:37Z", "updated": "2026-01-27T15:37:10Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-210247716-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:55:96"}, {"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:55:96"}, {"version": 4, "addr": "192.168.122.233", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:55:96"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b4f95e32-4dde-475f-bf71-8bd9391938a2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b4f95e32-4dde-475f-bf71-8bd9391938a2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-811725567", "OS-SRV-USG:launched_at": "2026-01-27T15:35:51.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--913492379"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", 
"OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.080 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b4f95e32-4dde-475f-bf71-8bd9391938a2 used request id req-d12c5ac9-80dc-4ede-b737-685cc0c36961 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.081 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b4f95e32-4dde-475f-bf71-8bd9391938a2', 'name': 'tempest-AttachInterfacesUnderV243Test-server-296491480', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'de927906c1224ae18edd6fb91a4a7037', 'user_id': '284e9a7227b6494189d43d1f5c7f629f', 'hostId': '66c8803a5331ae428b2d2c271ae3623e2da9dfef6eb886f3c97a0b02', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:37:15.082286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.127 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.128 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.175 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.latency volume: 4301750907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.176 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.178 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.178 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:37:15.178339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.179 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.179 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.179 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:37:15.182293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.195 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.196 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.210 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.211 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.212 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.213 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:37:15.213065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.216 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a / tapc4e14112-ad inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.217 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.220 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b4f95e32-4dde-475f-bf71-8bd9391938a2 / tap33cb1013-47 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.220 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.222 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:37:15.222205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.223 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:37:15.224209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.225 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.226 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:37:15.226250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.227 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.228 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:37:15.228781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.252 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/cpu volume: 11040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.271 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/cpu volume: 35620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.272 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.274 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.274 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:37:15.273604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.276 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:37:15.276269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.277 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.278 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.278 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.279 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.279 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a: ceilometer.compute.pollsters.NoVolumeException
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:37:15.278620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.279 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/memory.usage volume: 42.86328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.281 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.281 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.281 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:37:15.280976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.282 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1366686872>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-296491480>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1366686872>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-296491480>]
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.283 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.284 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:37:15.282503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.284 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.284 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:37:15.284072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.286 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.286 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:37:15.286292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.287 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.288 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.289 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.289 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.290 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.290 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.291 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:37:15.287884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:37:15.289756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.293 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:37:15.292533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.293 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.295 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:37:15.294849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.295 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.295 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.bytes volume: 30099968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.300 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.301 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:37:15.301726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.302 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.304 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.latency volume: 3031458274 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.304 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.latency volume: 7491301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:37:15.303923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.304 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.latency volume: 1021930712 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.305 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.latency volume: 91172523 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.307 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:37:15.306890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.307 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1366686872>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-296491480>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1366686872>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-296491480>]
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.308 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.309 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:37:15.308931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.309 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.309 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.requests volume: 1082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.310 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.311 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.312 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:37:15.311623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.312 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.312 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.314 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:37:15.314481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.315 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.317 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:37:15.316491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.317 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.317 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.bytes volume: 73105408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.318 14 DEBUG ceilometer.compute.pollsters [-] b4f95e32-4dde-475f-bf71-8bd9391938a2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:37:15.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:37:15 compute-0 nova_compute[185191]: 2026-01-27 15:37:15.399 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:15 compute-0 nova_compute[185191]: 2026-01-27 15:37:15.981 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:17 compute-0 nova_compute[185191]: 2026-01-27 15:37:17.078 185195 DEBUG nova.network.neutron [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updated VIF entry in instance network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:37:17 compute-0 nova_compute[185191]: 2026-01-27 15:37:17.079 185195 DEBUG nova.network.neutron [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:17 compute-0 nova_compute[185191]: 2026-01-27 15:37:17.101 185195 DEBUG oslo_concurrency.lockutils [req-4ea40aa4-2a53-4b99-84bc-8f27cbb5075a req-bfc307ac-df62-4b4c-8e3c-2e50211cc0ae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:17 compute-0 nova_compute[185191]: 2026-01-27 15:37:17.102 185195 DEBUG oslo_concurrency.lockutils [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:18 compute-0 podman[250752]: 2026-01-27 15:37:18.321152844 +0000 UTC m=+0.075149016 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:37:20 compute-0 nova_compute[185191]: 2026-01-27 15:37:20.401 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:20 compute-0 nova_compute[185191]: 2026-01-27 15:37:20.983 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:21 compute-0 nova_compute[185191]: 2026-01-27 15:37:21.657 185195 DEBUG nova.network.neutron [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:37:21 compute-0 nova_compute[185191]: 2026-01-27 15:37:21.978 185195 DEBUG nova.compute.manager [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:21 compute-0 nova_compute[185191]: 2026-01-27 15:37:21.978 185195 DEBUG nova.compute.manager [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing instance network info cache due to event network-changed-33cb1013-4786-49f5-a482-721c6aeb907b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:37:21 compute-0 nova_compute[185191]: 2026-01-27 15:37:21.979 185195 DEBUG oslo_concurrency.lockutils [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:22 compute-0 podman[250774]: 2026-01-27 15:37:22.310877496 +0000 UTC m=+0.060460142 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:37:22 compute-0 podman[250773]: 2026-01-27 15:37:22.320294959 +0000 UTC m=+0.072195437 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, config_id=kepler, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 15:37:24 compute-0 nova_compute[185191]: 2026-01-27 15:37:24.831 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:25 compute-0 nova_compute[185191]: 2026-01-27 15:37:25.322 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528230.3187993, 93810d43-1793-46e8-871a-11bf4aa9a642 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:25 compute-0 nova_compute[185191]: 2026-01-27 15:37:25.322 185195 INFO nova.compute.manager [-] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] VM Stopped (Lifecycle Event)
Jan 27 15:37:25 compute-0 nova_compute[185191]: 2026-01-27 15:37:25.378 185195 DEBUG nova.compute.manager [None req-0ad03e8a-bc5a-4cd1-8102-2d549d426d28 - - - - - -] [instance: 93810d43-1793-46e8-871a-11bf4aa9a642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:25 compute-0 nova_compute[185191]: 2026-01-27 15:37:25.404 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:25 compute-0 nova_compute[185191]: 2026-01-27 15:37:25.985 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:27 compute-0 podman[250813]: 2026-01-27 15:37:27.303001599 +0000 UTC m=+0.054846832 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:37:27 compute-0 nova_compute[185191]: 2026-01-27 15:37:27.624 185195 DEBUG nova.network.neutron [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:28 compute-0 nova_compute[185191]: 2026-01-27 15:37:28.048 185195 DEBUG oslo_concurrency.lockutils [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:28 compute-0 nova_compute[185191]: 2026-01-27 15:37:28.049 185195 DEBUG nova.compute.manager [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 15:37:28 compute-0 nova_compute[185191]: 2026-01-27 15:37:28.049 185195 DEBUG nova.compute.manager [None req-8f7afd21-fce8-40c2-9666-2c5fb72c2065 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] network_info to inject: |[{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 15:37:28 compute-0 nova_compute[185191]: 2026-01-27 15:37:28.052 185195 DEBUG oslo_concurrency.lockutils [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:28 compute-0 nova_compute[185191]: 2026-01-27 15:37:28.052 185195 DEBUG nova.network.neutron [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Refreshing network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.539 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.540 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.540 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.541 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.541 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.542 185195 INFO nova.compute.manager [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Terminating instance
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.543 185195 DEBUG nova.compute.manager [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:37:29 compute-0 kernel: tap33cb1013-47 (unregistering): left promiscuous mode
Jan 27 15:37:29 compute-0 NetworkManager[56090]: <info>  [1769528249.6469] device (tap33cb1013-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.668 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 ovn_controller[97541]: 2026-01-27T15:37:29Z|00116|binding|INFO|Releasing lport 33cb1013-4786-49f5-a482-721c6aeb907b from this chassis (sb_readonly=0)
Jan 27 15:37:29 compute-0 ovn_controller[97541]: 2026-01-27T15:37:29Z|00117|binding|INFO|Setting lport 33cb1013-4786-49f5-a482-721c6aeb907b down in Southbound
Jan 27 15:37:29 compute-0 ovn_controller[97541]: 2026-01-27T15:37:29Z|00118|binding|INFO|Removing iface tap33cb1013-47 ovn-installed in OVS
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.699 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.704 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 27 15:37:29 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 44.292s CPU time.
Jan 27 15:37:29 compute-0 systemd-machined[156506]: Machine qemu-6-instance-00000006 terminated.
Jan 27 15:37:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:29.725 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:55:96 10.100.0.6'], port_security=['fa:16:3e:c6:55:96 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4f95e32-4dde-475f-bf71-8bd9391938a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de927906c1224ae18edd6fb91a4a7037', 'neutron:revision_number': '6', 'neutron:security_group_ids': '63f3558f-ca7e-495f-bdf5-2d3d1950848a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64c139da-9754-4fed-b000-e06e325bc6ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=33cb1013-4786-49f5-a482-721c6aeb907b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:37:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:29.726 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 33cb1013-4786-49f5-a482-721c6aeb907b in datapath dd9a5530-7d18-48b0-bbd7-21f4f3192fce unbound from our chassis
Jan 27 15:37:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:29.728 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd9a5530-7d18-48b0-bbd7-21f4f3192fce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:37:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:29.729 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c445ce3c-14e4-42f5-bcf4-c397cba9feec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:29 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:29.729 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce namespace which is not needed anymore
Jan 27 15:37:29 compute-0 podman[201073]: time="2026-01-27T15:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:37:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 15:37:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4850 "" "Go-http-client/1.1"
Jan 27 15:37:29 compute-0 kernel: tap33cb1013-47: entered promiscuous mode
Jan 27 15:37:29 compute-0 kernel: tap33cb1013-47 (unregistering): left promiscuous mode
Jan 27 15:37:29 compute-0 NetworkManager[56090]: <info>  [1769528249.7774] manager: (tap33cb1013-47): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.781 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.819 185195 INFO nova.virt.libvirt.driver [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Instance destroyed successfully.
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.819 185195 DEBUG nova.objects.instance [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lazy-loading 'resources' on Instance uuid b4f95e32-4dde-475f-bf71-8bd9391938a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.840 185195 DEBUG nova.virt.libvirt.vif [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-296491480',display_name='tempest-AttachInterfacesUnderV243Test-server-296491480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-296491480',id=6,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMruRPVmIxyxJpLw1XteWxArIgcJ2nS0hQhNn3b2y9hdAlw+pR6sm2cPZ97Rely9ERzVsR/GKvqv4AG8086R3E12n5VkwDtAMg2Wmwzi0BPUMEmi7C5mquhLTMNiji6WQQ==',key_name='tempest-keypair-811725567',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:35:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de927906c1224ae18edd6fb91a4a7037',ramdisk_id='',reservation_id='r-8avql300',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1149926905',owner_user_name='tempest-AttachInterfacesUnderV243Test-1149926905-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:37:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='284e9a7227b6494189d43d1f5c7f629f',uuid=b4f95e32-4dde-475f-bf71-8bd9391938a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.841 185195 DEBUG nova.network.os_vif_util [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converting VIF {"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.841 185195 DEBUG nova.network.os_vif_util [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.842 185195 DEBUG os_vif [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.843 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.844 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33cb1013-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.845 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.847 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.849 185195 INFO os_vif [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:55:96,bridge_name='br-int',has_traffic_filtering=True,id=33cb1013-4786-49f5-a482-721c6aeb907b,network=Network(dd9a5530-7d18-48b0-bbd7-21f4f3192fce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33cb1013-47')
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.850 185195 INFO nova.virt.libvirt.driver [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Deleting instance files /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2_del
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.851 185195 INFO nova.virt.libvirt.driver [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Deletion of /var/lib/nova/instances/b4f95e32-4dde-475f-bf71-8bd9391938a2_del complete
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.938 185195 INFO nova.compute.manager [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.939 185195 DEBUG oslo.service.loopingcall [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.939 185195 DEBUG nova.compute.manager [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:37:29 compute-0 nova_compute[185191]: 2026-01-27 15:37:29.940 185195 DEBUG nova.network.neutron [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:37:29 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [NOTICE]   (249509) : haproxy version is 2.8.14-c23fe91
Jan 27 15:37:29 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [NOTICE]   (249509) : path to executable is /usr/sbin/haproxy
Jan 27 15:37:29 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [ALERT]    (249509) : Current worker (249513) exited with code 143 (Terminated)
Jan 27 15:37:29 compute-0 neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce[249504]: [WARNING]  (249509) : All workers exited. Exiting... (0)
Jan 27 15:37:29 compute-0 systemd[1]: libpod-a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4.scope: Deactivated successfully.
Jan 27 15:37:29 compute-0 podman[250875]: 2026-01-27 15:37:29.954526164 +0000 UTC m=+0.106404154 container died a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4-userdata-shm.mount: Deactivated successfully.
Jan 27 15:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-142169cacc7ae401695b79e524a198c75931d193588ec653c56a9fb187ee11d9-merged.mount: Deactivated successfully.
Jan 27 15:37:30 compute-0 podman[250875]: 2026-01-27 15:37:30.164914526 +0000 UTC m=+0.316792516 container cleanup a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 15:37:30 compute-0 systemd[1]: libpod-conmon-a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4.scope: Deactivated successfully.
Jan 27 15:37:30 compute-0 podman[250903]: 2026-01-27 15:37:30.319190803 +0000 UTC m=+0.129022301 container remove a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.327 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[738357c1-b767-4817-be7f-38a93a4df372]: (4, ('Tue Jan 27 03:37:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce (a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4)\na48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4\nTue Jan 27 03:37:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce (a48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4)\na48deca99d55c8c44c6cdbfe94399b366a94463fa9f3738e05894a9191091ea4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.329 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[39690de8-248c-4280-a181-13b9858e55f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.330 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd9a5530-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:37:30 compute-0 nova_compute[185191]: 2026-01-27 15:37:30.332 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:30 compute-0 kernel: tapdd9a5530-70: left promiscuous mode
Jan 27 15:37:30 compute-0 nova_compute[185191]: 2026-01-27 15:37:30.335 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.342 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f791f751-5c35-44ed-9861-57d858d54d5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 nova_compute[185191]: 2026-01-27 15:37:30.351 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.361 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f702afcd-f006-4bbe-a7c4-9dcf3ff07685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.362 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e85a3a-22ac-43f9-bd94-3c7bfabafda4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.378 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cf33e7cb-708d-4673-af0d-1a14697904ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572544, 'reachable_time': 16847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250918, 'error': None, 'target': 'ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 systemd[1]: run-netns-ovnmeta\x2ddd9a5530\x2d7d18\x2d48b0\x2dbbd7\x2d21f4f3192fce.mount: Deactivated successfully.
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.382 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd9a5530-7d18-48b0-bbd7-21f4f3192fce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:37:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:37:30.382 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9d200b-6b22-4bfb-84fd-2e9acfee9622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:37:30 compute-0 nova_compute[185191]: 2026-01-27 15:37:30.987 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:31 compute-0 openstack_network_exporter[204239]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:37:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:37:31 compute-0 openstack_network_exporter[204239]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:37:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:37:31 compute-0 nova_compute[185191]: 2026-01-27 15:37:31.985 185195 DEBUG nova.network.neutron [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.120 185195 INFO nova.compute.manager [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Took 2.18 seconds to deallocate network for instance.
Jan 27 15:37:32 compute-0 ovn_controller[97541]: 2026-01-27T15:37:32Z|00119|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.125 185195 DEBUG nova.compute.manager [req-334fd4f5-c7ca-4ae0-ba29-07fda5a34bc4 req-28a8106a-75b2-4e5b-b9cc-897a83e3f410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Received event network-vif-deleted-33cb1013-4786-49f5-a482-721c6aeb907b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.126 185195 INFO nova.compute.manager [req-334fd4f5-c7ca-4ae0-ba29-07fda5a34bc4 req-28a8106a-75b2-4e5b-b9cc-897a83e3f410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Neutron deleted interface 33cb1013-4786-49f5-a482-721c6aeb907b; detaching it from the instance and deleting it from the info cache
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.126 185195 DEBUG nova.network.neutron [req-334fd4f5-c7ca-4ae0-ba29-07fda5a34bc4 req-28a8106a-75b2-4e5b-b9cc-897a83e3f410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.183 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.244 185195 DEBUG nova.compute.manager [req-334fd4f5-c7ca-4ae0-ba29-07fda5a34bc4 req-28a8106a-75b2-4e5b-b9cc-897a83e3f410 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Detach interface failed, port_id=33cb1013-4786-49f5-a482-721c6aeb907b, reason: Instance b4f95e32-4dde-475f-bf71-8bd9391938a2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.294 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.295 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.437 185195 DEBUG nova.compute.provider_tree [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.488 185195 DEBUG nova.scheduler.client.report [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.608 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.676 185195 INFO nova.scheduler.client.report [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Deleted allocations for instance b4f95e32-4dde-475f-bf71-8bd9391938a2
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.802 185195 DEBUG oslo_concurrency.lockutils [None req-a61bbff5-10a2-4ae0-a4a3-11292984ad96 284e9a7227b6494189d43d1f5c7f629f de927906c1224ae18edd6fb91a4a7037 - - default default] Lock "b4f95e32-4dde-475f-bf71-8bd9391938a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.968 185195 DEBUG nova.network.neutron [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updated VIF entry in instance network info cache for port 33cb1013-4786-49f5-a482-721c6aeb907b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:37:32 compute-0 nova_compute[185191]: 2026-01-27 15:37:32.969 185195 DEBUG nova.network.neutron [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Updating instance_info_cache with network_info: [{"id": "33cb1013-4786-49f5-a482-721c6aeb907b", "address": "fa:16:3e:c6:55:96", "network": {"id": "dd9a5530-7d18-48b0-bbd7-21f4f3192fce", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-210247716-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de927906c1224ae18edd6fb91a4a7037", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33cb1013-47", "ovs_interfaceid": "33cb1013-4786-49f5-a482-721c6aeb907b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:33 compute-0 nova_compute[185191]: 2026-01-27 15:37:33.022 185195 DEBUG oslo_concurrency.lockutils [req-db3f9743-612f-4b7d-8c64-f6d0d4211f8a req-287039cf-53cf-4502-8508-595541d4b44a 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-b4f95e32-4dde-475f-bf71-8bd9391938a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:34 compute-0 nova_compute[185191]: 2026-01-27 15:37:34.847 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.990 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.994 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.995 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.995 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:35 compute-0 nova_compute[185191]: 2026-01-27 15:37:35.995 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.149 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.211 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.212 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.292 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.632 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.634 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=72.37713623046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.634 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.635 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.713 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.714 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.714 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.768 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.786 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.812 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:37:36 compute-0 nova_compute[185191]: 2026-01-27 15:37:36.812 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:37:39 compute-0 podman[250927]: 2026-01-27 15:37:39.335274447 +0000 UTC m=+0.080408238 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 15:37:39 compute-0 nova_compute[185191]: 2026-01-27 15:37:39.849 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:40 compute-0 nova_compute[185191]: 2026-01-27 15:37:40.992 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:41 compute-0 podman[250959]: 2026-01-27 15:37:41.328821477 +0000 UTC m=+0.074829928 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, 
io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64)
Jan 27 15:37:41 compute-0 podman[250957]: 2026-01-27 15:37:41.350081007 +0000 UTC m=+0.100693121 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:37:41 compute-0 podman[250958]: 2026-01-27 15:37:41.384638314 +0000 UTC m=+0.134573510 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:37:41 compute-0 ovn_controller[97541]: 2026-01-27T15:37:41Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:a8:c7 10.100.0.9
Jan 27 15:37:41 compute-0 ovn_controller[97541]: 2026-01-27T15:37:41Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:a8:c7 10.100.0.9
Jan 27 15:37:42 compute-0 nova_compute[185191]: 2026-01-27 15:37:42.550 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:43 compute-0 nova_compute[185191]: 2026-01-27 15:37:43.813 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:44 compute-0 nova_compute[185191]: 2026-01-27 15:37:44.817 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528249.815495, b4f95e32-4dde-475f-bf71-8bd9391938a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:37:44 compute-0 nova_compute[185191]: 2026-01-27 15:37:44.817 185195 INFO nova.compute.manager [-] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] VM Stopped (Lifecycle Event)
Jan 27 15:37:44 compute-0 nova_compute[185191]: 2026-01-27 15:37:44.854 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:44 compute-0 nova_compute[185191]: 2026-01-27 15:37:44.865 185195 DEBUG nova.compute.manager [None req-46d29f5e-875f-413e-8721-cac29d3b189b - - - - - -] [instance: b4f95e32-4dde-475f-bf71-8bd9391938a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:37:44 compute-0 nova_compute[185191]: 2026-01-27 15:37:44.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:45 compute-0 nova_compute[185191]: 2026-01-27 15:37:45.994 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:46 compute-0 nova_compute[185191]: 2026-01-27 15:37:46.597 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:46 compute-0 nova_compute[185191]: 2026-01-27 15:37:46.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:47 compute-0 nova_compute[185191]: 2026-01-27 15:37:47.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:49 compute-0 podman[251018]: 2026-01-27 15:37:49.309098962 +0000 UTC m=+0.064402338 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:37:49 compute-0 nova_compute[185191]: 2026-01-27 15:37:49.857 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:49 compute-0 nova_compute[185191]: 2026-01-27 15:37:49.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:49 compute-0 nova_compute[185191]: 2026-01-27 15:37:49.943 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:37:50 compute-0 nova_compute[185191]: 2026-01-27 15:37:50.997 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:51 compute-0 ovn_controller[97541]: 2026-01-27T15:37:51Z|00120|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:37:51 compute-0 nova_compute[185191]: 2026-01-27 15:37:51.100 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:51 compute-0 nova_compute[185191]: 2026-01-27 15:37:51.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:51 compute-0 nova_compute[185191]: 2026-01-27 15:37:51.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:37:51 compute-0 nova_compute[185191]: 2026-01-27 15:37:51.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:37:52 compute-0 nova_compute[185191]: 2026-01-27 15:37:52.531 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:37:52 compute-0 nova_compute[185191]: 2026-01-27 15:37:52.532 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:37:52 compute-0 nova_compute[185191]: 2026-01-27 15:37:52.532 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:37:52 compute-0 nova_compute[185191]: 2026-01-27 15:37:52.533 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:37:53 compute-0 podman[251043]: 2026-01-27 15:37:53.301852656 +0000 UTC m=+0.056245580 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:37:53 compute-0 podman[251042]: 2026-01-27 15:37:53.351540618 +0000 UTC m=+0.108607424 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release-0.7.12=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:37:53 compute-0 ovn_controller[97541]: 2026-01-27T15:37:53Z|00121|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:37:53 compute-0 nova_compute[185191]: 2026-01-27 15:37:53.593 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:54 compute-0 nova_compute[185191]: 2026-01-27 15:37:54.861 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:55 compute-0 nova_compute[185191]: 2026-01-27 15:37:55.999 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:56 compute-0 nova_compute[185191]: 2026-01-27 15:37:56.513 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:37:56 compute-0 nova_compute[185191]: 2026-01-27 15:37:56.556 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:37:56 compute-0 nova_compute[185191]: 2026-01-27 15:37:56.557 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:37:56 compute-0 nova_compute[185191]: 2026-01-27 15:37:56.557 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:37:57 compute-0 nova_compute[185191]: 2026-01-27 15:37:57.758 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:58 compute-0 podman[251087]: 2026-01-27 15:37:58.328462724 +0000 UTC m=+0.077480518 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:37:59 compute-0 podman[201073]: time="2026-01-27T15:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:37:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:37:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 15:37:59 compute-0 nova_compute[185191]: 2026-01-27 15:37:59.865 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:37:59 compute-0 nova_compute[185191]: 2026-01-27 15:37:59.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:00.258 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:00.259 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:00.259 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:01 compute-0 nova_compute[185191]: 2026-01-27 15:38:01.001 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:01 compute-0 openstack_network_exporter[204239]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:38:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:38:01 compute-0 openstack_network_exporter[204239]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:38:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:38:04 compute-0 ovn_controller[97541]: 2026-01-27T15:38:04Z|00122|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:38:04 compute-0 nova_compute[185191]: 2026-01-27 15:38:04.194 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:04 compute-0 nova_compute[185191]: 2026-01-27 15:38:04.867 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:06 compute-0 nova_compute[185191]: 2026-01-27 15:38:06.003 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:08.759 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:38:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:08.760 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:38:08 compute-0 nova_compute[185191]: 2026-01-27 15:38:08.762 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:09 compute-0 nova_compute[185191]: 2026-01-27 15:38:09.869 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:10 compute-0 podman[251109]: 2026-01-27 15:38:10.353086073 +0000 UTC m=+0.103063794 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 15:38:10 compute-0 nova_compute[185191]: 2026-01-27 15:38:10.556 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:11 compute-0 nova_compute[185191]: 2026-01-27 15:38:11.006 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:12 compute-0 podman[251130]: 2026-01-27 15:38:12.315951091 +0000 UTC m=+0.069510955 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter)
Jan 27 15:38:12 compute-0 podman[251128]: 2026-01-27 15:38:12.317609026 +0000 UTC m=+0.078264140 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:38:12 compute-0 podman[251129]: 2026-01-27 15:38:12.347138548 +0000 UTC m=+0.103433085 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:38:14 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:14.762 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:14 compute-0 nova_compute[185191]: 2026-01-27 15:38:14.872 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:16 compute-0 nova_compute[185191]: 2026-01-27 15:38:16.010 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.406 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.407 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.407 185195 INFO nova.compute.manager [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Rebooting instance
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.437 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.438 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.438 185195 DEBUG nova.network.neutron [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:38:17 compute-0 ovn_controller[97541]: 2026-01-27T15:38:17Z|00123|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.640 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:17 compute-0 nova_compute[185191]: 2026-01-27 15:38:17.750 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:19 compute-0 nova_compute[185191]: 2026-01-27 15:38:19.876 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 podman[251193]: 2026-01-27 15:38:20.313627512 +0000 UTC m=+0.066131464 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.580 185195 DEBUG nova.network.neutron [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.607 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.608 185195 DEBUG nova.compute.manager [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:20 compute-0 kernel: tapc4e14112-ad (unregistering): left promiscuous mode
Jan 27 15:38:20 compute-0 NetworkManager[56090]: <info>  [1769528300.7557] device (tapc4e14112-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:38:20 compute-0 ovn_controller[97541]: 2026-01-27T15:38:20Z|00124|binding|INFO|Releasing lport c4e14112-ad85-4d49-92a0-fa577e5760f3 from this chassis (sb_readonly=0)
Jan 27 15:38:20 compute-0 ovn_controller[97541]: 2026-01-27T15:38:20Z|00125|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 down in Southbound
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.767 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 ovn_controller[97541]: 2026-01-27T15:38:20Z|00126|binding|INFO|Removing iface tapc4e14112-ad ovn-installed in OVS
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.770 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:20.775 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:a8:c7 10.100.0.9'], port_security=['fa:16:3e:a4:a8:c7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48bde8d1-e906-4909-996e-97d5280dcfb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd0617549142039dbe55541a8fece5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82dd7f40-eb0d-42c8-9980-11f2bbab4495', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdd63418-506a-4397-9a84-8a1d6706b561, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=c4e14112-ad85-4d49-92a0-fa577e5760f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:38:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:20.777 106793 INFO neutron.agent.ovn.metadata.agent [-] Port c4e14112-ad85-4d49-92a0-fa577e5760f3 in datapath 48bde8d1-e906-4909-996e-97d5280dcfb1 unbound from our chassis
Jan 27 15:38:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:20.778 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48bde8d1-e906-4909-996e-97d5280dcfb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:38:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:20.779 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7d8eb4-087d-4123-88c0-b68c8e0308c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:20.780 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 namespace which is not needed anymore
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.786 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 27 15:38:20 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 44.951s CPU time.
Jan 27 15:38:20 compute-0 systemd-machined[156506]: Machine qemu-9-instance-00000009 terminated.
Jan 27 15:38:20 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [NOTICE]   (250419) : haproxy version is 2.8.14-c23fe91
Jan 27 15:38:20 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [NOTICE]   (250419) : path to executable is /usr/sbin/haproxy
Jan 27 15:38:20 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [WARNING]  (250419) : Exiting Master process...
Jan 27 15:38:20 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [ALERT]    (250419) : Current worker (250421) exited with code 143 (Terminated)
Jan 27 15:38:20 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[250415]: [WARNING]  (250419) : All workers exited. Exiting... (0)
Jan 27 15:38:20 compute-0 systemd[1]: libpod-dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74.scope: Deactivated successfully.
Jan 27 15:38:20 compute-0 podman[251237]: 2026-01-27 15:38:20.943400621 +0000 UTC m=+0.067566753 container died dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.950 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.956 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.992 185195 INFO nova.virt.libvirt.driver [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance destroyed successfully.
Jan 27 15:38:20 compute-0 nova_compute[185191]: 2026-01-27 15:38:20.994 185195 DEBUG nova.objects.instance [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'resources' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74-userdata-shm.mount: Deactivated successfully.
Jan 27 15:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d8b295d5c0224da8e38e43036a1a8fb18b214a097fd36a8018539e55069a71-merged.mount: Deactivated successfully.
Jan 27 15:38:21 compute-0 podman[251237]: 2026-01-27 15:38:21.007992523 +0000 UTC m=+0.132158635 container cleanup dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.011 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.016 185195 DEBUG nova.virt.libvirt.vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:37:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:38:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.017 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.018 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.018 185195 DEBUG os_vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.020 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.021 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4e14112-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.025 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:38:21 compute-0 systemd[1]: libpod-conmon-dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74.scope: Deactivated successfully.
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.029 185195 INFO os_vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad')
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.038 185195 DEBUG nova.virt.libvirt.driver [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start _get_guest_xml network_info=[{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.045 185195 WARNING nova.virt.libvirt.driver [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.059 185195 DEBUG nova.virt.libvirt.host [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.061 185195 DEBUG nova.virt.libvirt.host [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.077 185195 DEBUG nova.virt.libvirt.host [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.078 185195 DEBUG nova.virt.libvirt.host [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.078 185195 DEBUG nova.virt.libvirt.driver [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.079 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.079 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.080 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.080 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.081 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.081 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.081 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.082 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.082 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.083 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.083 185195 DEBUG nova.virt.hardware [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.083 185195 DEBUG nova.objects.instance [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:21 compute-0 podman[251283]: 2026-01-27 15:38:21.088183093 +0000 UTC m=+0.050621868 container remove dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.095 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[19800063-2cf4-4158-8ee2-8d63d512d03c]: (4, ('Tue Jan 27 03:38:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 (dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74)\ndfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74\nTue Jan 27 03:38:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 (dfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74)\ndfac8daa978187a910e84200062b41f9cf925ce0912928796db59329d0658a74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.097 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[21bdc4fa-a737-4abc-b205-e00ffb7f04e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.098 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48bde8d1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:21 compute-0 kernel: tap48bde8d1-e0: left promiscuous mode
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.104 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.117 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[32b9806a-431e-4653-9256-7e8f8d27c5da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.124 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.131 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[12378c18-c1b9-4368-abcf-5afbcdd98465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.132 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[54ba7b11-f53c-4b78-be1e-cb842d9c92e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.145 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eaacc2-aa8e-4779-b4b7-70715e20c907]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578919, 'reachable_time': 38750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251298, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d48bde8d1\x2de906\x2d4909\x2d996e\x2d97d5280dcfb1.mount: Deactivated successfully.
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.150 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.150 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[d905ad49-680d-41ee-88fb-31961cb08c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.169 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.171 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.171 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.172 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.173 185195 DEBUG nova.virt.libvirt.vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:37:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:38:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.173 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.174 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.176 185195 DEBUG nova.objects.instance [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.194 185195 DEBUG nova.virt.libvirt.driver [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <uuid>2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</uuid>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <name>instance-00000009</name>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:name>tempest-ServerActionsTestJSON-server-1366686872</nova:name>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:38:21</nova:creationTime>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:user uuid="37fdc28d88dc42689e835e91aad4c2d3">tempest-ServerActionsTestJSON-1260809908-project-member</nova:user>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:project uuid="85bd0617549142039dbe55541a8fece5">tempest-ServerActionsTestJSON-1260809908</nova:project>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         <nova:port uuid="c4e14112-ad85-4d49-92a0-fa577e5760f3">
Jan 27 15:38:21 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <system>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="serial">2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="uuid">2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </system>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <os>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </os>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <features>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </features>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.config"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:a4:a8:c7"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <target dev="tapc4e14112-ad"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/console.log" append="off"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <video>
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </video>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <input type="keyboard" bus="usb"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:38:21 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:38:21 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:38:21 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:38:21 compute-0 nova_compute[185191]: </domain>
Jan 27 15:38:21 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.199 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.262 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.264 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.330 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.332 185195 DEBUG nova.objects.instance [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.359 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.415 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.417 185195 DEBUG nova.virt.disk.api [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Checking if we can resize image /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.417 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.480 185195 DEBUG oslo_concurrency.processutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.481 185195 DEBUG nova.virt.disk.api [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Cannot resize image /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.481 185195 DEBUG nova.objects.instance [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'migration_context' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.558 185195 DEBUG nova.virt.libvirt.vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:37:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:38:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.559 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.561 185195 DEBUG nova.network.os_vif_util [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.562 185195 DEBUG os_vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.564 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.565 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.566 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.571 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.572 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4e14112-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.573 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4e14112-ad, col_values=(('external_ids', {'iface-id': 'c4e14112-ad85-4d49-92a0-fa577e5760f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:a8:c7', 'vm-uuid': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.576 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.5773] manager: (tapc4e14112-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.581 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.583 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.584 185195 INFO os_vif [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad')
Jan 27 15:38:21 compute-0 kernel: tapc4e14112-ad: entered promiscuous mode
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.6675] manager: (tapc4e14112-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 27 15:38:21 compute-0 systemd-udevd[251221]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.669 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 ovn_controller[97541]: 2026-01-27T15:38:21Z|00127|binding|INFO|Claiming lport c4e14112-ad85-4d49-92a0-fa577e5760f3 for this chassis.
Jan 27 15:38:21 compute-0 ovn_controller[97541]: 2026-01-27T15:38:21Z|00128|binding|INFO|c4e14112-ad85-4d49-92a0-fa577e5760f3: Claiming fa:16:3e:a4:a8:c7 10.100.0.9
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.6805] device (tapc4e14112-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.6812] device (tapc4e14112-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:38:21 compute-0 ovn_controller[97541]: 2026-01-27T15:38:21Z|00129|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 ovn-installed in OVS
Jan 27 15:38:21 compute-0 nova_compute[185191]: 2026-01-27 15:38:21.686 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:21 compute-0 systemd-machined[156506]: New machine qemu-11-instance-00000009.
Jan 27 15:38:21 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000009.
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.815 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:a8:c7 10.100.0.9'], port_security=['fa:16:3e:a4:a8:c7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48bde8d1-e906-4909-996e-97d5280dcfb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd0617549142039dbe55541a8fece5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82dd7f40-eb0d-42c8-9980-11f2bbab4495', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdd63418-506a-4397-9a84-8a1d6706b561, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=c4e14112-ad85-4d49-92a0-fa577e5760f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.816 106793 INFO neutron.agent.ovn.metadata.agent [-] Port c4e14112-ad85-4d49-92a0-fa577e5760f3 in datapath 48bde8d1-e906-4909-996e-97d5280dcfb1 bound to our chassis
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.817 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:38:21 compute-0 ovn_controller[97541]: 2026-01-27T15:38:21Z|00130|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 up in Southbound
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.827 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[077d1578-5f3d-4501-85d6-be16e80f07c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.828 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48bde8d1-e1 in ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.830 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48bde8d1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.830 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7b249-f356-4ab0-b570-d2cdc1abde4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.831 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aae356-c37c-437a-8f3e-d4877f25c26d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.842 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[22d46fcb-9fe9-4243-b8b7-437bf2a5f78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.867 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7aaac0fb-9fc3-4be4-9d6d-b0e08f7fc703]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.896 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[87b382da-79f7-497d-9fed-07e47e2baed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.9035] manager: (tap48bde8d1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.902 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4550a705-97ed-43a8-90fc-43f6c2f7d4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.946 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[323b3379-c50e-43a3-9ab8-ea4233a97049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.950 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ed33469a-1b12-45e2-8827-1a43fccc3e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:21 compute-0 NetworkManager[56090]: <info>  [1769528301.9745] device (tap48bde8d1-e0): carrier: link connected
Jan 27 15:38:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:21.981 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ea1ef-55d2-456f-bed9-4098ffbd9a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.003 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[57c7c34c-d126-497f-988d-6c33d876f146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48bde8d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:d0:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587770, 'reachable_time': 27444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251359, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.021 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d74d498d-8295-41a3-b54c-aee1f6440ab5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:d055'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587770, 'tstamp': 587770}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251360, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.038 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[57be298d-463c-4455-ad67-c8d7e4916860]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48bde8d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:d0:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587770, 'reachable_time': 27444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251361, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.069 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[64619903-dc82-450f-97af-7336fbe1732f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.128 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[00b6950b-1f0c-4976-a18f-157cbe396197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.130 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48bde8d1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.131 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.131 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48bde8d1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.133 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:22 compute-0 NetworkManager[56090]: <info>  [1769528302.1341] manager: (tap48bde8d1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 27 15:38:22 compute-0 kernel: tap48bde8d1-e0: entered promiscuous mode
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.139 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.139 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48bde8d1-e0, col_values=(('external_ids', {'iface-id': '6ae5c324-742b-43ab-97cd-a7094add5cfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.141 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:22 compute-0 ovn_controller[97541]: 2026-01-27T15:38:22Z|00131|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.153 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.154 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.156 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[29a3b40d-d5ea-4b8a-b11b-42aeaceddb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.156 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/48bde8d1-e906-4909-996e-97d5280dcfb1.pid.haproxy
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID 48bde8d1-e906-4909-996e-97d5280dcfb1
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:38:22 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:22.159 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'env', 'PROCESS_TAG=haproxy-48bde8d1-e906-4909-996e-97d5280dcfb1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48bde8d1-e906-4909-996e-97d5280dcfb1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.330 185195 DEBUG nova.virt.libvirt.host [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Removed pending event for 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.331 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528302.3301249, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.331 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Resumed (Lifecycle Event)
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.333 185195 DEBUG nova.compute.manager [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.344 185195 INFO nova.virt.libvirt.driver [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance rebooted successfully.
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.344 185195 DEBUG nova.compute.manager [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.358 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.363 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.387 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.387 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528302.3330646, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.388 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Started (Lifecycle Event)
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.410 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.417 185195 DEBUG oslo_concurrency.lockutils [None req-247e39f2-4ce2-4998-8042-0694fbeac748 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:22 compute-0 nova_compute[185191]: 2026-01-27 15:38:22.419 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:38:22 compute-0 podman[251399]: 2026-01-27 15:38:22.578201341 +0000 UTC m=+0.060615826 container create 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:38:22 compute-0 systemd[1]: Started libpod-conmon-33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3.scope.
Jan 27 15:38:22 compute-0 podman[251399]: 2026-01-27 15:38:22.549788789 +0000 UTC m=+0.032203274 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:38:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f49e1a12b6ce4d449ba33db0f3b5227dfa41c882ce4a8c10434048ad9a8b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:38:22 compute-0 podman[251399]: 2026-01-27 15:38:22.690886663 +0000 UTC m=+0.173301178 container init 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 15:38:22 compute-0 podman[251399]: 2026-01-27 15:38:22.699145195 +0000 UTC m=+0.181559690 container start 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 15:38:22 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [NOTICE]   (251418) : New worker (251420) forked
Jan 27 15:38:22 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [NOTICE]   (251418) : Loading success.
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.199 185195 DEBUG nova.compute.manager [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.199 185195 DEBUG oslo_concurrency.lockutils [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.200 185195 DEBUG oslo_concurrency.lockutils [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.200 185195 DEBUG oslo_concurrency.lockutils [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.201 185195 DEBUG nova.compute.manager [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:38:23 compute-0 nova_compute[185191]: 2026-01-27 15:38:23.201 185195 WARNING nova.compute.manager [req-4e538760-0a8b-4745-91a3-e1e6be609b7c req-c62262b2-a4c5-48f6-91ed-ce5c42023194 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state None.
Jan 27 15:38:24 compute-0 podman[251430]: 2026-01-27 15:38:24.337064848 +0000 UTC m=+0.076174614 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:38:24 compute-0 podman[251429]: 2026-01-27 15:38:24.340592443 +0000 UTC m=+0.087606951 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 15:38:25 compute-0 nova_compute[185191]: 2026-01-27 15:38:25.881 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.014 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.577 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.974 185195 DEBUG nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.974 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.975 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.975 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.975 185195 DEBUG nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.976 185195 WARNING nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state None.
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.976 185195 DEBUG nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.976 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.977 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.977 185195 DEBUG oslo_concurrency.lockutils [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.977 185195 DEBUG nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:38:26 compute-0 nova_compute[185191]: 2026-01-27 15:38:26.978 185195 WARNING nova.compute.manager [req-56a9f9a5-9202-49d6-ab7f-ac6118d8e399 req-04e650ad-7232-4c6c-828b-fd5a262025af 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state None.
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.262 185195 DEBUG nova.compute.manager [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.262 185195 DEBUG oslo_concurrency.lockutils [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.263 185195 DEBUG oslo_concurrency.lockutils [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.263 185195 DEBUG oslo_concurrency.lockutils [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.263 185195 DEBUG nova.compute.manager [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:38:29 compute-0 nova_compute[185191]: 2026-01-27 15:38:29.263 185195 WARNING nova.compute.manager [req-25d76e94-53b4-4c53-82a0-f9c445c42de5 req-27fe9332-6ccc-48be-80d2-b0ea6580aa17 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state None.
Jan 27 15:38:29 compute-0 podman[251474]: 2026-01-27 15:38:29.305347192 +0000 UTC m=+0.061762967 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:38:29 compute-0 podman[201073]: time="2026-01-27T15:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:38:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:38:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.015 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.041 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.042 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.070 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.193 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.196 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:31 compute-0 openstack_network_exporter[204239]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:38:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:38:31 compute-0 openstack_network_exporter[204239]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:38:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.465 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.466 185195 INFO nova.compute.claims [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.579 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.683 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.687 185195 DEBUG nova.compute.provider_tree [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.732 185195 DEBUG nova.scheduler.client.report [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.777 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.778 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.854 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.856 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.890 185195 INFO nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:38:31 compute-0 nova_compute[185191]: 2026-01-27 15:38:31.924 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.109 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.110 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.110 185195 INFO nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Creating image(s)
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.110 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.110 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.111 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.135 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.198 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.199 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.200 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.211 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.267 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.268 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.313 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.314 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.315 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.373 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.375 185195 DEBUG nova.virt.disk.api [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Checking if we can resize image /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.375 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.434 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.435 185195 DEBUG nova.virt.disk.api [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Cannot resize image /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.435 185195 DEBUG nova.objects.instance [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'migration_context' on Instance uuid cb018734-6031-42f0-98a2-1cd3bfd95c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.460 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.460 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Ensure instance console log exists: /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.460 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.461 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.461 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:32 compute-0 nova_compute[185191]: 2026-01-27 15:38:32.562 185195 DEBUG nova.policy [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5debc8bd8b947ef8b11b0edb9d8624e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff135d375334408199a41eb5e406fa31', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.702 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Successfully created port: b3766198-88ae-43c4-8f5d-53661a568cde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.981 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.982 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.982 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:35 compute-0 nova_compute[185191]: 2026-01-27 15:38:35.982 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.017 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.097 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.161 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.162 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.248 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.555 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.556 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=72.34872436523438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.556 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.557 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.582 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.812 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.813 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance cb018734-6031-42f0-98a2-1cd3bfd95c69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.813 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.818 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.886 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:38:36 compute-0 nova_compute[185191]: 2026-01-27 15:38:36.923 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:38:37 compute-0 nova_compute[185191]: 2026-01-27 15:38:37.037 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:38:37 compute-0 nova_compute[185191]: 2026-01-27 15:38:37.038 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:37 compute-0 nova_compute[185191]: 2026-01-27 15:38:37.988 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Successfully updated port: b3766198-88ae-43c4-8f5d-53661a568cde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.019 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.019 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquired lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.020 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.175 185195 DEBUG nova.compute.manager [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.176 185195 DEBUG nova.compute.manager [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing instance network info cache due to event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.176 185195 DEBUG oslo_concurrency.lockutils [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:38:38 compute-0 nova_compute[185191]: 2026-01-27 15:38:38.272 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.791 185195 DEBUG nova.network.neutron [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.812 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Releasing lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.813 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Instance network_info: |[{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.814 185195 DEBUG oslo_concurrency.lockutils [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.814 185195 DEBUG nova.network.neutron [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.817 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Start _get_guest_xml network_info=[{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.823 185195 WARNING nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.831 185195 DEBUG nova.virt.libvirt.host [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.833 185195 DEBUG nova.virt.libvirt.host [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.849 185195 DEBUG nova.virt.libvirt.host [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.849 185195 DEBUG nova.virt.libvirt.host [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.850 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.850 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.851 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.851 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.851 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.852 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.852 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.852 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.853 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.853 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.853 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.854 185195 DEBUG nova.virt.hardware [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.857 185195 DEBUG nova.virt.libvirt.vif [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1100779126',display_name='tempest-TestNetworkBasicOps-server-1100779126',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1100779126',id=11,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHf8da76ICP1FE4SxDbt3YLW/bs/58jyYG47+B9oCgXw3XIrB9hFCTLCXEqtUY3LzA0WMyYL5qCR/vJiWNNnwJ3t2/4Ht1zYjhMss6JgqFnNVdGGTHrJ9AkX90eos/vFVg==',key_name='tempest-TestNetworkBasicOps-1771239932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-e1be0u24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:38:31Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=cb018734-6031-42f0-98a2-1cd3bfd95c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.857 185195 DEBUG nova.network.os_vif_util [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.858 185195 DEBUG nova.network.os_vif_util [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.859 185195 DEBUG nova.objects.instance [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb018734-6031-42f0-98a2-1cd3bfd95c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.884 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <uuid>cb018734-6031-42f0-98a2-1cd3bfd95c69</uuid>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <name>instance-0000000b</name>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:name>tempest-TestNetworkBasicOps-server-1100779126</nova:name>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:38:39</nova:creationTime>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:user uuid="a5debc8bd8b947ef8b11b0edb9d8624e">tempest-TestNetworkBasicOps-1734510166-project-member</nova:user>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:project uuid="ff135d375334408199a41eb5e406fa31">tempest-TestNetworkBasicOps-1734510166</nova:project>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         <nova:port uuid="b3766198-88ae-43c4-8f5d-53661a568cde">
Jan 27 15:38:39 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <system>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="serial">cb018734-6031-42f0-98a2-1cd3bfd95c69</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="uuid">cb018734-6031-42f0-98a2-1cd3bfd95c69</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </system>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <os>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </os>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <features>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </features>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.config"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:3a:8b:a2"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <target dev="tapb3766198-88"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/console.log" append="off"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <video>
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </video>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:38:39 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:38:39 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:38:39 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:38:39 compute-0 nova_compute[185191]: </domain>
Jan 27 15:38:39 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.885 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Preparing to wait for external event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.885 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.886 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.886 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.887 185195 DEBUG nova.virt.libvirt.vif [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1100779126',display_name='tempest-TestNetworkBasicOps-server-1100779126',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1100779126',id=11,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHf8da76ICP1FE4SxDbt3YLW/bs/58jyYG47+B9oCgXw3XIrB9hFCTLCXEqtUY3LzA0WMyYL5qCR/vJiWNNnwJ3t2/4Ht1zYjhMss6JgqFnNVdGGTHrJ9AkX90eos/vFVg==',key_name='tempest-TestNetworkBasicOps-1771239932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-e1be0u24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:38:31Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=cb018734-6031-42f0-98a2-1cd3bfd95c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.887 185195 DEBUG nova.network.os_vif_util [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.888 185195 DEBUG nova.network.os_vif_util [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.888 185195 DEBUG os_vif [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.889 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.889 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.890 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.892 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.893 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3766198-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.893 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3766198-88, col_values=(('external_ids', {'iface-id': 'b3766198-88ae-43c4-8f5d-53661a568cde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:8b:a2', 'vm-uuid': 'cb018734-6031-42f0-98a2-1cd3bfd95c69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.895 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:39 compute-0 NetworkManager[56090]: <info>  [1769528319.8967] manager: (tapb3766198-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.898 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.904 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.905 185195 INFO os_vif [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88')
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.987 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.988 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.989 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No VIF found with MAC fa:16:3e:3a:8b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:38:39 compute-0 nova_compute[185191]: 2026-01-27 15:38:39.989 185195 INFO nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Using config drive
Jan 27 15:38:40 compute-0 ovn_controller[97541]: 2026-01-27T15:38:40Z|00132|binding|INFO|Releasing lport 6ae5c324-742b-43ab-97cd-a7094add5cfb from this chassis (sb_readonly=0)
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.391 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.649 185195 INFO nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Creating config drive at /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.config
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.656 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvq9fl8m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.782 185195 DEBUG oslo_concurrency.processutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvq9fl8m" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:38:40 compute-0 kernel: tapb3766198-88: entered promiscuous mode
Jan 27 15:38:40 compute-0 NetworkManager[56090]: <info>  [1769528320.8533] manager: (tapb3766198-88): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.856 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:40 compute-0 ovn_controller[97541]: 2026-01-27T15:38:40Z|00133|binding|INFO|Claiming lport b3766198-88ae-43c4-8f5d-53661a568cde for this chassis.
Jan 27 15:38:40 compute-0 ovn_controller[97541]: 2026-01-27T15:38:40Z|00134|binding|INFO|b3766198-88ae-43c4-8f5d-53661a568cde: Claiming fa:16:3e:3a:8b:a2 10.100.0.6
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.864 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.873 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8b:a2 10.100.0.6'], port_security=['fa:16:3e:3a:8b:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cb018734-6031-42f0-98a2-1cd3bfd95c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69348b1d-27dc-488f-b1c0-e5faaa154377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff135d375334408199a41eb5e406fa31', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a7eea46a-2779-4c19-92df-561b56dcec78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03063d12-1719-4bc3-90aa-20f60e1e1459, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=b3766198-88ae-43c4-8f5d-53661a568cde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.875 106793 INFO neutron.agent.ovn.metadata.agent [-] Port b3766198-88ae-43c4-8f5d-53661a568cde in datapath 69348b1d-27dc-488f-b1c0-e5faaa154377 bound to our chassis
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.878 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69348b1d-27dc-488f-b1c0-e5faaa154377
Jan 27 15:38:40 compute-0 ovn_controller[97541]: 2026-01-27T15:38:40Z|00135|binding|INFO|Setting lport b3766198-88ae-43c4-8f5d-53661a568cde ovn-installed in OVS
Jan 27 15:38:40 compute-0 ovn_controller[97541]: 2026-01-27T15:38:40Z|00136|binding|INFO|Setting lport b3766198-88ae-43c4-8f5d-53661a568cde up in Southbound
Jan 27 15:38:40 compute-0 nova_compute[185191]: 2026-01-27 15:38:40.885 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.891 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[73392416-9895-4372-9688-c5cfac7cfaef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.892 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69348b1d-21 in ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.894 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69348b1d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.894 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[51ca20c7-3920-4193-a7db-4303c0b0132c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.896 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8acfac5a-1ba9-4961-a992-2d21ffde02e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 systemd-machined[156506]: New machine qemu-12-instance-0000000b.
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.907 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[4e79bc5b-a19d-469c-b797-c3a0872fee8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.933 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e614f3c4-c0a6-492a-81d7-fa191610e373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 systemd-udevd[251555]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:38:40 compute-0 NetworkManager[56090]: <info>  [1769528320.9553] device (tapb3766198-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:38:40 compute-0 NetworkManager[56090]: <info>  [1769528320.9558] device (tapb3766198-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.980 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[34b6918b-751b-47c6-879f-7e448e2ae8ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:40.986 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb684ec-220c-447c-b303-7084257b9423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:40 compute-0 systemd-udevd[251563]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:38:40 compute-0 NetworkManager[56090]: <info>  [1769528320.9870] manager: (tap69348b1d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Jan 27 15:38:40 compute-0 podman[251529]: 2026-01-27 15:38:40.988824836 +0000 UTC m=+0.127105089 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.019 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.019 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[71c9b1d2-218d-445e-8399-3ad58ed308f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.023 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[b485e923-395d-473b-b2d6-198bbaa5984d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 NetworkManager[56090]: <info>  [1769528321.0448] device (tap69348b1d-20): carrier: link connected
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.048 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[36bc3dac-37de-436f-a7a2-b5ab9470e316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.063 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2dc347-64f5-4863-940c-58c52e3c2612]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69348b1d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:98:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589677, 'reachable_time': 39322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251587, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.080 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9d72b7-cdfb-4e40-9e89-6424f33c584a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:98d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589677, 'tstamp': 589677}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251588, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.098 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5f925c37-46f1-4671-9c2a-f426dccc199b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69348b1d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:98:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589677, 'reachable_time': 39322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251589, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.130 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f16914c8-a3c6-438e-8b0a-fbaadef997db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.181 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[30ff8945-632b-414d-8390-90b4ab00b6a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.183 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69348b1d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.183 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.183 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69348b1d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.185 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 NetworkManager[56090]: <info>  [1769528321.1870] manager: (tap69348b1d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 27 15:38:41 compute-0 kernel: tap69348b1d-20: entered promiscuous mode
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.189 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.197 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69348b1d-20, col_values=(('external_ids', {'iface-id': '7867416a-c1b6-4934-a0ce-b1255fa030c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.198 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 ovn_controller[97541]: 2026-01-27T15:38:41Z|00137|binding|INFO|Releasing lport 7867416a-c1b6-4934-a0ce-b1255fa030c3 from this chassis (sb_readonly=0)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.199 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.203 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69348b1d-27dc-488f-b1c0-e5faaa154377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69348b1d-27dc-488f-b1c0-e5faaa154377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.204 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[47320c6e-c838-451e-b7ca-77cb6a839250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.205 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-69348b1d-27dc-488f-b1c0-e5faaa154377
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/69348b1d-27dc-488f-b1c0-e5faaa154377.pid.haproxy
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID 69348b1d-27dc-488f-b1c0-e5faaa154377
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:38:41 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:38:41.206 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'env', 'PROCESS_TAG=haproxy-69348b1d-27dc-488f-b1c0-e5faaa154377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69348b1d-27dc-488f-b1c0-e5faaa154377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.218 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:41 compute-0 podman[251625]: 2026-01-27 15:38:41.620882525 +0000 UTC m=+0.074756556 container create 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.639 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528321.638961, cb018734-6031-42f0-98a2-1cd3bfd95c69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.640 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] VM Started (Lifecycle Event)
Jan 27 15:38:41 compute-0 podman[251625]: 2026-01-27 15:38:41.579387172 +0000 UTC m=+0.033261243 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.682 185195 DEBUG nova.network.neutron [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updated VIF entry in instance network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.683 185195 DEBUG nova.network.neutron [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:38:41 compute-0 systemd[1]: Started libpod-conmon-595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2.scope.
Jan 27 15:38:41 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f002dd14444ced53e62ed905be355ea0173045f7272471e0ab876231ca258de9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:38:41 compute-0 podman[251625]: 2026-01-27 15:38:41.767301022 +0000 UTC m=+0.221175093 container init 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.768 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:41 compute-0 podman[251625]: 2026-01-27 15:38:41.777034293 +0000 UTC m=+0.230908324 container start 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.778 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528321.6423895, cb018734-6031-42f0-98a2-1cd3bfd95c69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.779 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] VM Paused (Lifecycle Event)
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.783 185195 DEBUG oslo_concurrency.lockutils [req-57927187-d10f-4a95-8710-83a55c1141f0 req-99f1c2d8-7dd9-45e4-afe7-e6a39d976096 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:38:41 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [NOTICE]   (251647) : New worker (251649) forked
Jan 27 15:38:41 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [NOTICE]   (251647) : Loading success.
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.838 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.847 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:38:41 compute-0 nova_compute[185191]: 2026-01-27 15:38:41.877 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:38:43 compute-0 podman[251660]: 2026-01-27 15:38:43.325895219 +0000 UTC m=+0.067208624 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 27 15:38:43 compute-0 podman[251659]: 2026-01-27 15:38:43.358811002 +0000 UTC m=+0.104476803 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:38:43 compute-0 podman[251658]: 2026-01-27 15:38:43.374736899 +0000 UTC m=+0.110838564 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 15:38:44 compute-0 nova_compute[185191]: 2026-01-27 15:38:44.897 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:45 compute-0 nova_compute[185191]: 2026-01-27 15:38:45.038 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.022 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.739 185195 DEBUG nova.compute.manager [req-6d12c81a-de1a-4196-bb95-cec707857d85 req-8f8ae296-85fd-4288-9f59-e9d47a700d95 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.740 185195 DEBUG oslo_concurrency.lockutils [req-6d12c81a-de1a-4196-bb95-cec707857d85 req-8f8ae296-85fd-4288-9f59-e9d47a700d95 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.740 185195 DEBUG oslo_concurrency.lockutils [req-6d12c81a-de1a-4196-bb95-cec707857d85 req-8f8ae296-85fd-4288-9f59-e9d47a700d95 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.741 185195 DEBUG oslo_concurrency.lockutils [req-6d12c81a-de1a-4196-bb95-cec707857d85 req-8f8ae296-85fd-4288-9f59-e9d47a700d95 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.741 185195 DEBUG nova.compute.manager [req-6d12c81a-de1a-4196-bb95-cec707857d85 req-8f8ae296-85fd-4288-9f59-e9d47a700d95 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Processing event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.742 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.747 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528326.746958, cb018734-6031-42f0-98a2-1cd3bfd95c69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.748 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] VM Resumed (Lifecycle Event)
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.749 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.755 185195 INFO nova.virt.libvirt.driver [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Instance spawned successfully.
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.756 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.783 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.791 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.795 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.795 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.796 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.796 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.797 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.797 185195 DEBUG nova.virt.libvirt.driver [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.834 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.872 185195 INFO nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Took 14.76 seconds to spawn the instance on the hypervisor.
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.873 185195 DEBUG nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.947 185195 INFO nova.compute.manager [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Took 15.79 seconds to build instance.
Jan 27 15:38:46 compute-0 nova_compute[185191]: 2026-01-27 15:38:46.966 185195 DEBUG oslo_concurrency.lockutils [None req-6a7fac1a-1ced-452b-aeb1-c8666cd80b71 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:47 compute-0 nova_compute[185191]: 2026-01-27 15:38:47.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:48 compute-0 nova_compute[185191]: 2026-01-27 15:38:48.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.079 185195 DEBUG nova.compute.manager [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.081 185195 DEBUG oslo_concurrency.lockutils [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.081 185195 DEBUG oslo_concurrency.lockutils [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.082 185195 DEBUG oslo_concurrency.lockutils [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.083 185195 DEBUG nova.compute.manager [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] No waiting events found dispatching network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.083 185195 WARNING nova.compute.manager [req-6a17b917-9aae-46a3-a3cc-0024641d87c3 req-91ff04de-1669-4d23-a5d9-79d839cda7d3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received unexpected event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde for instance with vm_state active and task_state None.
Jan 27 15:38:49 compute-0 nova_compute[185191]: 2026-01-27 15:38:49.899 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:50 compute-0 nova_compute[185191]: 2026-01-27 15:38:50.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:50 compute-0 nova_compute[185191]: 2026-01-27 15:38:50.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.024 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:51 compute-0 podman[251721]: 2026-01-27 15:38:51.350150754 +0000 UTC m=+0.109225070 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.972 185195 DEBUG nova.compute.manager [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.973 185195 DEBUG nova.compute.manager [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing instance network info cache due to event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.974 185195 DEBUG oslo_concurrency.lockutils [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.974 185195 DEBUG oslo_concurrency.lockutils [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:38:51 compute-0 nova_compute[185191]: 2026-01-27 15:38:51.974 185195 DEBUG nova.network.neutron [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:38:52 compute-0 nova_compute[185191]: 2026-01-27 15:38:52.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:52 compute-0 nova_compute[185191]: 2026-01-27 15:38:52.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:38:52 compute-0 nova_compute[185191]: 2026-01-27 15:38:52.947 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.759 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.760 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.760 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.760 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.995 185195 DEBUG nova.network.neutron [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updated VIF entry in instance network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:38:53 compute-0 nova_compute[185191]: 2026-01-27 15:38:53.995 185195 DEBUG nova.network.neutron [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:38:54 compute-0 nova_compute[185191]: 2026-01-27 15:38:54.080 185195 DEBUG oslo_concurrency.lockutils [req-5d2ceaed-fea8-4d99-bd7f-54079681dc59 req-1724a078-2259-4054-b1f5-8adc0e38a947 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:38:54 compute-0 nova_compute[185191]: 2026-01-27 15:38:54.903 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:55 compute-0 podman[251743]: 2026-01-27 15:38:55.306283636 +0000 UTC m=+0.058070229 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:38:55 compute-0 podman[251742]: 2026-01-27 15:38:55.314357662 +0000 UTC m=+0.069909886 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 15:38:56 compute-0 nova_compute[185191]: 2026-01-27 15:38:56.026 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:56 compute-0 nova_compute[185191]: 2026-01-27 15:38:56.711 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [{"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:38:56 compute-0 nova_compute[185191]: 2026-01-27 15:38:56.740 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:38:56 compute-0 nova_compute[185191]: 2026-01-27 15:38:56.741 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:38:56 compute-0 nova_compute[185191]: 2026-01-27 15:38:56.742 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:38:57 compute-0 ovn_controller[97541]: 2026-01-27T15:38:57Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:a8:c7 10.100.0.9
Jan 27 15:38:59 compute-0 podman[201073]: time="2026-01-27T15:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:38:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 15:38:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Jan 27 15:38:59 compute-0 nova_compute[185191]: 2026-01-27 15:38:59.905 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:38:59 compute-0 sshd-session[251789]: Invalid user sol from 2.57.122.238 port 36476
Jan 27 15:39:00 compute-0 podman[251793]: 2026-01-27 15:39:00.033902245 +0000 UTC m=+0.067575103 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:39:00 compute-0 sshd-session[251789]: Connection closed by invalid user sol 2.57.122.238 port 36476 [preauth]
Jan 27 15:39:00 compute-0 sshd-session[251791]: Invalid user ubuntu from 45.148.10.240 port 49710
Jan 27 15:39:00 compute-0 sshd-session[251791]: Connection closed by invalid user ubuntu 45.148.10.240 port 49710 [preauth]
Jan 27 15:39:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:00.259 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:00.260 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:00.261 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:00 compute-0 nova_compute[185191]: 2026-01-27 15:39:00.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:01 compute-0 nova_compute[185191]: 2026-01-27 15:39:01.028 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:01 compute-0 openstack_network_exporter[204239]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:39:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:39:01 compute-0 openstack_network_exporter[204239]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:39:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:39:04 compute-0 nova_compute[185191]: 2026-01-27 15:39:04.909 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:06 compute-0 nova_compute[185191]: 2026-01-27 15:39:06.031 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:08.995 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:39:08 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:08.996 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:39:08 compute-0 nova_compute[185191]: 2026-01-27 15:39:08.998 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:09 compute-0 nova_compute[185191]: 2026-01-27 15:39:09.912 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.992 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.993 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.993 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc9849b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.000 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'name': 'tempest-ServerActionsTestJSON-server-1366686872', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '85bd0617549142039dbe55541a8fece5', 'user_id': '37fdc28d88dc42689e835e91aad4c2d3', 'hostId': '1f7300eb44e10075c6cc0cb140aad0f7d6c6c299bdbe0bd07ddb3879', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.007 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cb018734-6031-42f0-98a2-1cd3bfd95c69 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:39:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:11.007 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cb018734-6031-42f0-98a2-1cd3bfd95c69 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:39:11 compute-0 nova_compute[185191]: 2026-01-27 15:39:11.032 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:11 compute-0 podman[251818]: 2026-01-27 15:39:11.352912925 +0000 UTC m=+0.096650253 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.040 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1976 Content-Type: application/json Date: Tue, 27 Jan 2026 15:39:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-56cde16c-2708-4532-a2d7-3aa4228ec360 x-openstack-request-id: req-56cde16c-2708-4532-a2d7-3aa4228ec360 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.041 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cb018734-6031-42f0-98a2-1cd3bfd95c69", "name": "tempest-TestNetworkBasicOps-server-1100779126", "status": "ACTIVE", "tenant_id": "ff135d375334408199a41eb5e406fa31", "user_id": "a5debc8bd8b947ef8b11b0edb9d8624e", "metadata": {}, "hostId": "1cd2fabfd4dd45fb18d60f7789e76176ed83349e52dca65e859ed97b", "image": {"id": "fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:38:29Z", "updated": "2026-01-27T15:38:46Z", "addresses": {"tempest-network-smoke--1655557402": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3a:8b:a2"}, {"version": 4, "addr": "192.168.122.204", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3a:8b:a2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cb018734-6031-42f0-98a2-1cd3bfd95c69"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cb018734-6031-42f0-98a2-1cd3bfd95c69"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1771239932", "OS-SRV-USG:launched_at": "2026-01-27T15:38:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1021760231"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.041 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cb018734-6031-42f0-98a2-1cd3bfd95c69 used request id req-56cde16c-2708-4532-a2d7-3aa4228ec360 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.042 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cb018734-6031-42f0-98a2-1cd3bfd95c69', 'name': 'tempest-TestNetworkBasicOps-server-1100779126', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ff135d375334408199a41eb5e406fa31', 'user_id': 'a5debc8bd8b947ef8b11b0edb9d8624e', 'hostId': '1cd2fabfd4dd45fb18d60f7789e76176ed83349e52dca65e859ed97b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.043 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.043 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:39:12.043460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.079 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.latency volume: 92955346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.080 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.149 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.149 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.151 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.152 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.requests volume: 35 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:39:12.151545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.152 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.153 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.153 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:39:12.155559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.169 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.169 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.200 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.201 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.203 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:39:12.204088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.208 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.212 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cb018734-6031-42f0-98a2-1cd3bfd95c69 / tapb3766198-88 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.213 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.214 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.214 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:39:12.215183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.218 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:39:12.218089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.220 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.221 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.221 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.222 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:39:12.221042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.223 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.223 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:39:12.223867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.244 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/cpu volume: 33920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.270 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/cpu volume: 25130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.271 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.272 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.272 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.273 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:39:12.272512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.275 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.276 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:39:12.275467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.277 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:39:12.278376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.279 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/memory.usage volume: 41.94140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.279 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.279 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance cb018734-6031-42f0-98a2-1cd3bfd95c69: ceilometer.compute.pollsters.NoVolumeException
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.281 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.281 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.bytes volume: 1431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.282 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:39:12.281337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.283 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:39:12.283966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.284 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.284 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.284 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1100779126>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1100779126>]
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:39:12.286130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.286 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.287 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.290 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.bytes.delta volume: 1341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:39:12.289633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.291 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:39:12.293685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.294 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.bytes volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.295 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.296 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:39:12.297145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.297 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.298 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.299 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.299 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.301 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.302 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:39:12.301179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.304 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.bytes volume: 32036864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.305 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.305 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.306 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:39:12.304277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.308 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.outgoing.bytes.delta volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.309 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:39:12.308025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:39:12.311119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.312 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.latency volume: 1179451708 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.312 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.latency volume: 93174611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.313 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.latency volume: 784136647 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.313 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.latency volume: 940955 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:39:12.314881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.316 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.316 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1100779126>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1100779126>]
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:39:12.317247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.317 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.requests volume: 1212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.318 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.319 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.319 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.320 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.321 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.usage volume: 30146560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.322 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:39:12.321082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.322 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.323 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.325 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:39:12.324807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.325 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.327 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.bytes volume: 311296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.328 14 DEBUG ceilometer.compute.pollsters [-] 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.328 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:39:12.327361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.329 14 DEBUG ceilometer.compute.pollsters [-] cb018734-6031-42f0-98a2-1cd3bfd95c69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:39:12.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:39:13 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:13.998 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:14 compute-0 podman[251838]: 2026-01-27 15:39:14.327923855 +0000 UTC m=+0.078792454 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter)
Jan 27 15:39:14 compute-0 podman[251836]: 2026-01-27 15:39:14.3467305 +0000 UTC m=+0.103704452 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 27 15:39:14 compute-0 podman[251837]: 2026-01-27 15:39:14.3821527 +0000 UTC m=+0.137237272 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:39:14 compute-0 nova_compute[185191]: 2026-01-27 15:39:14.915 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:16 compute-0 nova_compute[185191]: 2026-01-27 15:39:16.034 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:17 compute-0 nova_compute[185191]: 2026-01-27 15:39:17.403 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:19 compute-0 nova_compute[185191]: 2026-01-27 15:39:19.919 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:21 compute-0 nova_compute[185191]: 2026-01-27 15:39:21.037 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:21 compute-0 ovn_controller[97541]: 2026-01-27T15:39:21Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:8b:a2 10.100.0.6
Jan 27 15:39:21 compute-0 ovn_controller[97541]: 2026-01-27T15:39:21Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:8b:a2 10.100.0.6
Jan 27 15:39:22 compute-0 podman[251912]: 2026-01-27 15:39:22.310252266 +0000 UTC m=+0.064210723 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:39:24 compute-0 nova_compute[185191]: 2026-01-27 15:39:24.924 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:26 compute-0 nova_compute[185191]: 2026-01-27 15:39:26.039 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:26 compute-0 podman[251931]: 2026-01-27 15:39:26.304366275 +0000 UTC m=+0.057341448 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:39:26 compute-0 podman[251930]: 2026-01-27 15:39:26.333221649 +0000 UTC m=+0.091620868 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler)
Jan 27 15:39:26 compute-0 nova_compute[185191]: 2026-01-27 15:39:26.357 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:27 compute-0 nova_compute[185191]: 2026-01-27 15:39:27.928 185195 INFO nova.compute.manager [None req-26de7b4f-410c-4809-adf3-ed4c88587200 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Get console output
Jan 27 15:39:28 compute-0 nova_compute[185191]: 2026-01-27 15:39:28.023 238468 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 15:39:29 compute-0 podman[201073]: time="2026-01-27T15:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:39:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 15:39:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4849 "" "Go-http-client/1.1"
Jan 27 15:39:29 compute-0 nova_compute[185191]: 2026-01-27 15:39:29.928 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:30 compute-0 podman[251972]: 2026-01-27 15:39:30.328430058 +0000 UTC m=+0.084436696 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.043 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.194 185195 DEBUG nova.compute.manager [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.194 185195 DEBUG nova.compute.manager [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing instance network info cache due to event network-changed-b3766198-88ae-43c4-8f5d-53661a568cde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.195 185195 DEBUG oslo_concurrency.lockutils [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.195 185195 DEBUG oslo_concurrency.lockutils [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:39:31 compute-0 nova_compute[185191]: 2026-01-27 15:39:31.195 185195 DEBUG nova.network.neutron [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Refreshing network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:39:31 compute-0 openstack_network_exporter[204239]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:39:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:39:31 compute-0 openstack_network_exporter[204239]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:39:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.793 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.794 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.794 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.794 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.794 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.795 185195 INFO nova.compute.manager [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Terminating instance
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.796 185195 DEBUG nova.compute.manager [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:39:34 compute-0 kernel: tapc4e14112-ad (unregistering): left promiscuous mode
Jan 27 15:39:34 compute-0 NetworkManager[56090]: <info>  [1769528374.8304] device (tapc4e14112-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:39:34 compute-0 ovn_controller[97541]: 2026-01-27T15:39:34Z|00138|binding|INFO|Releasing lport c4e14112-ad85-4d49-92a0-fa577e5760f3 from this chassis (sb_readonly=0)
Jan 27 15:39:34 compute-0 ovn_controller[97541]: 2026-01-27T15:39:34Z|00139|binding|INFO|Setting lport c4e14112-ad85-4d49-92a0-fa577e5760f3 down in Southbound
Jan 27 15:39:34 compute-0 ovn_controller[97541]: 2026-01-27T15:39:34Z|00140|binding|INFO|Removing iface tapc4e14112-ad ovn-installed in OVS
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.842 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.855 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:34 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 27 15:39:34 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Consumed 42.716s CPU time.
Jan 27 15:39:34 compute-0 systemd-machined[156506]: Machine qemu-11-instance-00000009 terminated.
Jan 27 15:39:34 compute-0 nova_compute[185191]: 2026-01-27 15:39:34.931 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:35 compute-0 nova_compute[185191]: 2026-01-27 15:39:35.064 185195 INFO nova.virt.libvirt.driver [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Instance destroyed successfully.
Jan 27 15:39:35 compute-0 nova_compute[185191]: 2026-01-27 15:39:35.064 185195 DEBUG nova.objects.instance [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lazy-loading 'resources' on Instance uuid 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.045 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.229 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:a8:c7 10.100.0.9'], port_security=['fa:16:3e:a4:a8:c7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48bde8d1-e906-4909-996e-97d5280dcfb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd0617549142039dbe55541a8fece5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '82dd7f40-eb0d-42c8-9980-11f2bbab4495', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdd63418-506a-4397-9a84-8a1d6706b561, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=c4e14112-ad85-4d49-92a0-fa577e5760f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.231 106793 INFO neutron.agent.ovn.metadata.agent [-] Port c4e14112-ad85-4d49-92a0-fa577e5760f3 in datapath 48bde8d1-e906-4909-996e-97d5280dcfb1 unbound from our chassis
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.232 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48bde8d1-e906-4909-996e-97d5280dcfb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.233 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[98f58f8d-cebc-4254-af74-e29cc0e7e445]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.234 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 namespace which is not needed anymore
Jan 27 15:39:36 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [NOTICE]   (251418) : haproxy version is 2.8.14-c23fe91
Jan 27 15:39:36 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [NOTICE]   (251418) : path to executable is /usr/sbin/haproxy
Jan 27 15:39:36 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [WARNING]  (251418) : Exiting Master process...
Jan 27 15:39:36 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [ALERT]    (251418) : Current worker (251420) exited with code 143 (Terminated)
Jan 27 15:39:36 compute-0 neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1[251414]: [WARNING]  (251418) : All workers exited. Exiting... (0)
Jan 27 15:39:36 compute-0 systemd[1]: libpod-33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3.scope: Deactivated successfully.
Jan 27 15:39:36 compute-0 podman[252035]: 2026-01-27 15:39:36.417018084 +0000 UTC m=+0.082281918 container died 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3-userdata-shm.mount: Deactivated successfully.
Jan 27 15:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52f49e1a12b6ce4d449ba33db0f3b5227dfa41c882ce4a8c10434048ad9a8b8-merged.mount: Deactivated successfully.
Jan 27 15:39:36 compute-0 podman[252035]: 2026-01-27 15:39:36.483050665 +0000 UTC m=+0.148314499 container cleanup 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:39:36 compute-0 systemd[1]: libpod-conmon-33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3.scope: Deactivated successfully.
Jan 27 15:39:36 compute-0 podman[252063]: 2026-01-27 15:39:36.586046377 +0000 UTC m=+0.076971185 container remove 33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.594 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[669ea988-884d-4b1d-9a21-6758b68aaac2]: (4, ('Tue Jan 27 03:39:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 (33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3)\n33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3\nTue Jan 27 03:39:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 (33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3)\n33cc1ea7e1fb5b79831d1199d0be0702204da631a6b21d8aa0d90a010535c2b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.597 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[cb187e93-7b7a-41b8-bc87-a5fe06e13c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.598 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48bde8d1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.600 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 kernel: tap48bde8d1-e0: left promiscuous mode
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.620 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.622 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0b42efc6-48a7-41ba-b483-b2b6e4aabee1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.637 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f1013ca1-25cb-4d59-a3e9-87d88a48dcb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.639 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[acb5ac3e-4c0f-486b-9094-1afe1faf2c87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.655 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2d61230e-dac9-443d-a9b8-3939b768afd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587762, 'reachable_time': 39859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252081, 'error': None, 'target': 'ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.658 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48bde8d1-e906-4909-996e-97d5280dcfb1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:39:36 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:39:36.658 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[8061f2de-4ba9-4c11-a135-a3bb1d3ef75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:39:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d48bde8d1\x2de906\x2d4909\x2d996e\x2d97d5280dcfb1.mount: Deactivated successfully.
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.685 185195 DEBUG nova.virt.libvirt.vif [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1366686872',display_name='tempest-ServerActionsTestJSON-server-1366686872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1366686872',id=9,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOOG6+xlzVgXR460fH4wCBDfSZ+Bqzod+T+TwINETdjxfNX82OuoN42CwFP5m4Wq/GmFxEISV/cN9fFUJXMVe/yMQysH0bvTBYF3s0nsMzz6e7cmVx9K1BA1d07EqkEl/g==',key_name='tempest-keypair-1606409313',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:37:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='85bd0617549142039dbe55541a8fece5',ramdisk_id='',reservation_id='r-fqvroo3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1260809908',owner_user_name='tempest-ServerActionsTestJSON-1260809908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:38:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37fdc28d88dc42689e835e91aad4c2d3',uuid=2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.685 185195 DEBUG nova.network.os_vif_util [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converting VIF {"id": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "address": "fa:16:3e:a4:a8:c7", "network": {"id": "48bde8d1-e906-4909-996e-97d5280dcfb1", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-232594074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd0617549142039dbe55541a8fece5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4e14112-ad", "ovs_interfaceid": "c4e14112-ad85-4d49-92a0-fa577e5760f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.686 185195 DEBUG nova.network.os_vif_util [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.686 185195 DEBUG os_vif [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.688 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.688 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4e14112-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.690 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.692 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.695 185195 INFO os_vif [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:a8:c7,bridge_name='br-int',has_traffic_filtering=True,id=c4e14112-ad85-4d49-92a0-fa577e5760f3,network=Network(48bde8d1-e906-4909-996e-97d5280dcfb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4e14112-ad')
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.696 185195 INFO nova.virt.libvirt.driver [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Deleting instance files /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a_del
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.697 185195 INFO nova.virt.libvirt.driver [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Deletion of /var/lib/nova/instances/2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a_del complete
Jan 27 15:39:36 compute-0 nova_compute[185191]: 2026-01-27 15:39:36.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.065 185195 INFO nova.compute.manager [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Took 2.27 seconds to destroy the instance on the hypervisor.
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.066 185195 DEBUG oslo.service.loopingcall [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.066 185195 DEBUG nova.compute.manager [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.067 185195 DEBUG nova.network.neutron [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.084 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.085 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.085 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.085 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.578 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.646 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.647 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.709 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.871 185195 DEBUG nova.network.neutron [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updated VIF entry in instance network info cache for port b3766198-88ae-43c4-8f5d-53661a568cde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:39:37 compute-0 nova_compute[185191]: 2026-01-27 15:39:37.872 185195 DEBUG nova.network.neutron [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.042 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.044 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=72.34928512573242GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.045 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.045 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.132 185195 DEBUG oslo_concurrency.lockutils [req-140431ff-d93e-41df-9655-125e06f50cda req-9fd81f12-3afe-4a64-b363-6e9f22ecdb2f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.182 185195 DEBUG nova.compute.manager [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.183 185195 DEBUG oslo_concurrency.lockutils [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.183 185195 DEBUG oslo_concurrency.lockutils [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.183 185195 DEBUG oslo_concurrency.lockutils [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.184 185195 DEBUG nova.compute.manager [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.184 185195 DEBUG nova.compute.manager [req-1e3ed6ab-378d-4e28-abda-b1c5ac467738 req-b87721e3-63aa-48ac-a9ae-641a386a0b69 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-unplugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.875 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.876 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance cb018734-6031-42f0-98a2-1cd3bfd95c69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.876 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:39:38 compute-0 nova_compute[185191]: 2026-01-27 15:39:38.876 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:39:39 compute-0 nova_compute[185191]: 2026-01-27 15:39:39.999 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:39:40 compute-0 nova_compute[185191]: 2026-01-27 15:39:40.112 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:39:40 compute-0 nova_compute[185191]: 2026-01-27 15:39:40.988 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:39:40 compute-0 nova_compute[185191]: 2026-01-27 15:39:40.989 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.048 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.216 185195 DEBUG nova.compute.manager [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.217 185195 DEBUG oslo_concurrency.lockutils [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.217 185195 DEBUG oslo_concurrency.lockutils [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.218 185195 DEBUG oslo_concurrency.lockutils [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.218 185195 DEBUG nova.compute.manager [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] No waiting events found dispatching network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.218 185195 WARNING nova.compute.manager [req-46755bf7-af01-439a-96bd-fd8d348d6636 req-0cfd6360-7fd4-4679-8655-046bf265f6b2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received unexpected event network-vif-plugged-c4e14112-ad85-4d49-92a0-fa577e5760f3 for instance with vm_state active and task_state deleting.
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.347 185195 DEBUG nova.network.neutron [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.539 185195 DEBUG nova.compute.manager [req-4becdfe7-8366-4789-873c-c9ca01e04bf0 req-40aa11b6-8acb-4954-9b95-ce194f49bcff 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Received event network-vif-deleted-c4e14112-ad85-4d49-92a0-fa577e5760f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.540 185195 INFO nova.compute.manager [req-4becdfe7-8366-4789-873c-c9ca01e04bf0 req-40aa11b6-8acb-4954-9b95-ce194f49bcff 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Neutron deleted interface c4e14112-ad85-4d49-92a0-fa577e5760f3; detaching it from the instance and deleting it from the info cache
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.540 185195 DEBUG nova.network.neutron [req-4becdfe7-8366-4789-873c-c9ca01e04bf0 req-40aa11b6-8acb-4954-9b95-ce194f49bcff 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.581 185195 INFO nova.compute.manager [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Took 4.51 seconds to deallocate network for instance.
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.690 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.713 185195 DEBUG nova.compute.manager [req-4becdfe7-8366-4789-873c-c9ca01e04bf0 req-40aa11b6-8acb-4954-9b95-ce194f49bcff 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Detach interface failed, port_id=c4e14112-ad85-4d49-92a0-fa577e5760f3, reason: Instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.889 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:41 compute-0 nova_compute[185191]: 2026-01-27 15:39:41.890 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:42 compute-0 nova_compute[185191]: 2026-01-27 15:39:42.011 185195 DEBUG nova.compute.provider_tree [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:39:42 compute-0 nova_compute[185191]: 2026-01-27 15:39:42.104 185195 DEBUG nova.scheduler.client.report [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:39:42 compute-0 nova_compute[185191]: 2026-01-27 15:39:42.282 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:42 compute-0 podman[252089]: 2026-01-27 15:39:42.294803117 +0000 UTC m=+0.053986988 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 15:39:42 compute-0 nova_compute[185191]: 2026-01-27 15:39:42.396 185195 INFO nova.scheduler.client.report [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Deleted allocations for instance 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a
Jan 27 15:39:43 compute-0 nova_compute[185191]: 2026-01-27 15:39:43.037 185195 DEBUG oslo_concurrency.lockutils [None req-031656f8-f3a8-4b28-b42c-ecc5beddf453 37fdc28d88dc42689e835e91aad4c2d3 85bd0617549142039dbe55541a8fece5 - - default default] Lock "2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:44 compute-0 podman[252108]: 2026-01-27 15:39:44.761126157 +0000 UTC m=+0.077093669 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:39:44 compute-0 podman[252110]: 2026-01-27 15:39:44.774004532 +0000 UTC m=+0.080703115 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter)
Jan 27 15:39:44 compute-0 podman[252109]: 2026-01-27 15:39:44.813622965 +0000 UTC m=+0.126077103 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 15:39:46 compute-0 nova_compute[185191]: 2026-01-27 15:39:46.052 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:46 compute-0 nova_compute[185191]: 2026-01-27 15:39:46.319 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:46 compute-0 nova_compute[185191]: 2026-01-27 15:39:46.694 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:49 compute-0 nova_compute[185191]: 2026-01-27 15:39:49.990 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:49 compute-0 nova_compute[185191]: 2026-01-27 15:39:49.992 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:49 compute-0 nova_compute[185191]: 2026-01-27 15:39:49.993 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:49 compute-0 nova_compute[185191]: 2026-01-27 15:39:49.993 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:50 compute-0 nova_compute[185191]: 2026-01-27 15:39:50.061 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528375.0592473, 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:39:50 compute-0 nova_compute[185191]: 2026-01-27 15:39:50.062 185195 INFO nova.compute.manager [-] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] VM Stopped (Lifecycle Event)
Jan 27 15:39:50 compute-0 nova_compute[185191]: 2026-01-27 15:39:50.093 185195 DEBUG nova.compute.manager [None req-4913c199-9d4e-4eff-ae49-b042bc64fc3b - - - - - -] [instance: 2104dc2f-7d60-431b-9fc9-d32ff2ff1a4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:39:50 compute-0 ovn_controller[97541]: 2026-01-27T15:39:50Z|00141|binding|INFO|Releasing lport 7867416a-c1b6-4934-a0ce-b1255fa030c3 from this chassis (sb_readonly=0)
Jan 27 15:39:50 compute-0 nova_compute[185191]: 2026-01-27 15:39:50.198 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:50 compute-0 ovn_controller[97541]: 2026-01-27T15:39:50Z|00142|binding|INFO|Releasing lport 7867416a-c1b6-4934-a0ce-b1255fa030c3 from this chassis (sb_readonly=0)
Jan 27 15:39:50 compute-0 nova_compute[185191]: 2026-01-27 15:39:50.365 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:51 compute-0 nova_compute[185191]: 2026-01-27 15:39:51.054 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:51 compute-0 nova_compute[185191]: 2026-01-27 15:39:51.696 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:52 compute-0 nova_compute[185191]: 2026-01-27 15:39:52.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:52 compute-0 nova_compute[185191]: 2026-01-27 15:39:52.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.245 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.245 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.269 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:39:53 compute-0 podman[252173]: 2026-01-27 15:39:53.322805223 +0000 UTC m=+0.077234483 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.344 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.345 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.357 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.357 185195 INFO nova.compute.claims [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.569 185195 DEBUG nova.compute.provider_tree [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.598 185195 DEBUG nova.scheduler.client.report [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.659 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.659 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.715 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.716 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.759 185195 INFO nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.792 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.853 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.853 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.853 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.964 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.965 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.965 185195 INFO nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Creating image(s)
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.966 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.966 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.967 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:53 compute-0 nova_compute[185191]: 2026-01-27 15:39:53.981 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.006 185195 DEBUG nova.policy [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5debc8bd8b947ef8b11b0edb9d8624e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff135d375334408199a41eb5e406fa31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.048 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.049 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.050 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.063 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.122 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.123 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.178 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.179 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.180 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.254 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.255 185195 DEBUG nova.virt.disk.api [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Checking if we can resize image /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.256 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.320 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.322 185195 DEBUG nova.virt.disk.api [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Cannot resize image /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.323 185195 DEBUG nova.objects.instance [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'migration_context' on Instance uuid c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.362 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.363 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Ensure instance console log exists: /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.363 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.364 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:54 compute-0 nova_compute[185191]: 2026-01-27 15:39:54.364 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:55 compute-0 nova_compute[185191]: 2026-01-27 15:39:55.877 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Successfully created port: 813e4105-c4d2-422b-930b-0f60d675471e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.056 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.417 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [{"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.582 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-cb018734-6031-42f0-98a2-1cd3bfd95c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.582 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.583 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.583 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.584 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.698 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:56 compute-0 nova_compute[185191]: 2026-01-27 15:39:56.898 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Successfully updated port: 813e4105-c4d2-422b-930b-0f60d675471e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.047 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.048 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquired lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.048 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.235 185195 DEBUG nova.compute.manager [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-changed-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.236 185195 DEBUG nova.compute.manager [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Refreshing instance network info cache due to event network-changed-813e4105-c4d2-422b-930b-0f60d675471e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.236 185195 DEBUG oslo_concurrency.lockutils [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:39:57 compute-0 podman[252210]: 2026-01-27 15:39:57.339457696 +0000 UTC m=+0.091439653 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:39:57 compute-0 podman[252209]: 2026-01-27 15:39:57.3712818 +0000 UTC m=+0.115097488 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, vendor=Red 
Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:39:57 compute-0 nova_compute[185191]: 2026-01-27 15:39:57.382 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:39:59 compute-0 podman[201073]: time="2026-01-27T15:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:39:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:39:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.877 185195 DEBUG nova.network.neutron [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updating instance_info_cache with network_info: [{"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.906 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Releasing lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.907 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Instance network_info: |[{"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.907 185195 DEBUG oslo_concurrency.lockutils [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.908 185195 DEBUG nova.network.neutron [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Refreshing network info cache for port 813e4105-c4d2-422b-930b-0f60d675471e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.911 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Start _get_guest_xml network_info=[{"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.918 185195 WARNING nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.924 185195 DEBUG nova.virt.libvirt.host [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.925 185195 DEBUG nova.virt.libvirt.host [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.929 185195 DEBUG nova.virt.libvirt.host [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.930 185195 DEBUG nova.virt.libvirt.host [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.930 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.931 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.932 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.932 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.932 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.933 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.933 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.934 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.934 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.935 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.935 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.936 185195 DEBUG nova.virt.hardware [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.940 185195 DEBUG nova.virt.libvirt.vif [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:39:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701197020',display_name='tempest-TestNetworkBasicOps-server-701197020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701197020',id=12,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOUlp6JafqTPPTj9gctWgt13/hyzuI7pE69zFBWCEnpPttD7IljnNNGlfwUPZFbP4I4yrjIATGZU+V9QLjFjTq2Je/ZYeNB3z0rE4slEuZdtnPGg0CtDoVYmo/nZptAhlQ==',key_name='tempest-TestNetworkBasicOps-242086254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-mqf58h0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:39:53Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=c6a8ebba-2d8f-4d9c-b173-65a0b035bf25,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.940 185195 DEBUG nova.network.os_vif_util [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.941 185195 DEBUG nova.network.os_vif_util [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.942 185195 DEBUG nova.objects.instance [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.957 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <uuid>c6a8ebba-2d8f-4d9c-b173-65a0b035bf25</uuid>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <name>instance-0000000c</name>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:name>tempest-TestNetworkBasicOps-server-701197020</nova:name>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:39:59</nova:creationTime>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:user uuid="a5debc8bd8b947ef8b11b0edb9d8624e">tempest-TestNetworkBasicOps-1734510166-project-member</nova:user>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:project uuid="ff135d375334408199a41eb5e406fa31">tempest-TestNetworkBasicOps-1734510166</nova:project>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         <nova:port uuid="813e4105-c4d2-422b-930b-0f60d675471e">
Jan 27 15:39:59 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <system>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="serial">c6a8ebba-2d8f-4d9c-b173-65a0b035bf25</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="uuid">c6a8ebba-2d8f-4d9c-b173-65a0b035bf25</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </system>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <os>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </os>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <features>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </features>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.config"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:95:fd:e5"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <target dev="tap813e4105-c4"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/console.log" append="off"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <video>
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </video>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:39:59 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:39:59 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:39:59 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:39:59 compute-0 nova_compute[185191]: </domain>
Jan 27 15:39:59 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.959 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Preparing to wait for external event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.959 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.959 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.959 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.960 185195 DEBUG nova.virt.libvirt.vif [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:39:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701197020',display_name='tempest-TestNetworkBasicOps-server-701197020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701197020',id=12,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOUlp6JafqTPPTj9gctWgt13/hyzuI7pE69zFBWCEnpPttD7IljnNNGlfwUPZFbP4I4yrjIATGZU+V9QLjFjTq2Je/ZYeNB3z0rE4slEuZdtnPGg0CtDoVYmo/nZptAhlQ==',key_name='tempest-TestNetworkBasicOps-242086254',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-mqf58h0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:39:53Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=c6a8ebba-2d8f-4d9c-b173-65a0b035bf25,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.960 185195 DEBUG nova.network.os_vif_util [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.961 185195 DEBUG nova.network.os_vif_util [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.961 185195 DEBUG os_vif [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.962 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.962 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.962 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.964 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.965 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap813e4105-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.965 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap813e4105-c4, col_values=(('external_ids', {'iface-id': '813e4105-c4d2-422b-930b-0f60d675471e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:fd:e5', 'vm-uuid': 'c6a8ebba-2d8f-4d9c-b173-65a0b035bf25'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.966 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:59 compute-0 NetworkManager[56090]: <info>  [1769528399.9675] manager: (tap813e4105-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.969 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.973 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:39:59 compute-0 nova_compute[185191]: 2026-01-27 15:39:59.973 185195 INFO os_vif [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4')
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.036 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.037 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.045 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] No VIF found with MAC fa:16:3e:95:fd:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.046 185195 INFO nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Using config drive
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.261 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.261 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.609 185195 INFO nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Creating config drive at /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.config
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.613 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps16iexqo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.739 185195 DEBUG oslo_concurrency.processutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps16iexqo" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:00 compute-0 kernel: tap813e4105-c4: entered promiscuous mode
Jan 27 15:40:00 compute-0 ovn_controller[97541]: 2026-01-27T15:40:00Z|00143|binding|INFO|Claiming lport 813e4105-c4d2-422b-930b-0f60d675471e for this chassis.
Jan 27 15:40:00 compute-0 ovn_controller[97541]: 2026-01-27T15:40:00Z|00144|binding|INFO|813e4105-c4d2-422b-930b-0f60d675471e: Claiming fa:16:3e:95:fd:e5 10.100.0.13
Jan 27 15:40:00 compute-0 NetworkManager[56090]: <info>  [1769528400.8261] manager: (tap813e4105-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.826 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.837 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:fd:e5 10.100.0.13'], port_security=['fa:16:3e:95:fd:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6a8ebba-2d8f-4d9c-b173-65a0b035bf25', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69348b1d-27dc-488f-b1c0-e5faaa154377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff135d375334408199a41eb5e406fa31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f9ec478-f259-49a7-97fb-15e0561e7285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03063d12-1719-4bc3-90aa-20f60e1e1459, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=813e4105-c4d2-422b-930b-0f60d675471e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.838 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 813e4105-c4d2-422b-930b-0f60d675471e in datapath 69348b1d-27dc-488f-b1c0-e5faaa154377 bound to our chassis
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.839 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69348b1d-27dc-488f-b1c0-e5faaa154377
Jan 27 15:40:00 compute-0 ovn_controller[97541]: 2026-01-27T15:40:00Z|00145|binding|INFO|Setting lport 813e4105-c4d2-422b-930b-0f60d675471e ovn-installed in OVS
Jan 27 15:40:00 compute-0 ovn_controller[97541]: 2026-01-27T15:40:00Z|00146|binding|INFO|Setting lport 813e4105-c4d2-422b-930b-0f60d675471e up in Southbound
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.856 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.857 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[09851ce0-17ac-445f-bbd9-9b101286778f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 systemd-udevd[252280]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:40:00 compute-0 systemd-machined[156506]: New machine qemu-13-instance-0000000c.
Jan 27 15:40:00 compute-0 NetworkManager[56090]: <info>  [1769528400.8864] device (tap813e4105-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:40:00 compute-0 NetworkManager[56090]: <info>  [1769528400.8871] device (tap813e4105-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:40:00 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.890 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[13c3cc0f-2c97-48f4-92e0-0374717391a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.894 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[dd196909-8a87-4a2d-a4c9-1d891e9355e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.924 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[15e6b8b8-d40a-4122-b2d0-a2073b0be973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.941 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[29988255-aead-45f5-b0fd-b0c3106115c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69348b1d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:98:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589677, 'reachable_time': 39322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252292, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 podman[252261]: 2026-01-27 15:40:00.948383577 +0000 UTC m=+0.129098543 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.956 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[44a81dc5-ff4b-4d0d-8151-1afa520eeeb3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69348b1d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589688, 'tstamp': 589688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252304, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69348b1d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589690, 'tstamp': 589690}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252304, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.958 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69348b1d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.959 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:00 compute-0 nova_compute[185191]: 2026-01-27 15:40:00.961 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.961 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69348b1d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.961 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.961 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69348b1d-20, col_values=(('external_ids', {'iface-id': '7867416a-c1b6-4934-a0ce-b1255fa030c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:00.962 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.058 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.271 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528401.2708106, c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.272 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] VM Started (Lifecycle Event)
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.308 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.314 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528401.2718463, c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.315 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] VM Paused (Lifecycle Event)
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.381 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.388 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:40:01 compute-0 openstack_network_exporter[204239]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:40:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:40:01 compute-0 openstack_network_exporter[204239]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:40:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.445 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.654 185195 DEBUG nova.network.neutron [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updated VIF entry in instance network info cache for port 813e4105-c4d2-422b-930b-0f60d675471e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.655 185195 DEBUG nova.network.neutron [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updating instance_info_cache with network_info: [{"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.676 185195 DEBUG oslo_concurrency.lockutils [req-f7ee72bb-667b-4e8d-a034-baaf69f7513d req-9150a2a2-77c2-454c-a470-91d8b256cb41 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.731 185195 DEBUG nova.compute.manager [req-122c3814-e3df-4415-b84d-e9d0d109d740 req-baa7a608-7312-4096-a507-c1de77714688 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.731 185195 DEBUG oslo_concurrency.lockutils [req-122c3814-e3df-4415-b84d-e9d0d109d740 req-baa7a608-7312-4096-a507-c1de77714688 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.731 185195 DEBUG oslo_concurrency.lockutils [req-122c3814-e3df-4415-b84d-e9d0d109d740 req-baa7a608-7312-4096-a507-c1de77714688 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.731 185195 DEBUG oslo_concurrency.lockutils [req-122c3814-e3df-4415-b84d-e9d0d109d740 req-baa7a608-7312-4096-a507-c1de77714688 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.731 185195 DEBUG nova.compute.manager [req-122c3814-e3df-4415-b84d-e9d0d109d740 req-baa7a608-7312-4096-a507-c1de77714688 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Processing event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.732 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.736 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528401.7356071, c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.736 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] VM Resumed (Lifecycle Event)
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.738 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.744 185195 INFO nova.virt.libvirt.driver [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Instance spawned successfully.
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.744 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.767 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.774 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.779 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.779 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.780 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.780 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.781 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.781 185195 DEBUG nova.virt.libvirt.driver [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.821 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.864 185195 INFO nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Took 7.90 seconds to spawn the instance on the hypervisor.
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.864 185195 DEBUG nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.947 185195 INFO nova.compute.manager [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Took 8.63 seconds to build instance.
Jan 27 15:40:01 compute-0 nova_compute[185191]: 2026-01-27 15:40:01.967 185195 DEBUG oslo_concurrency.lockutils [None req-76302d23-f681-4c0d-aaaa-2a75bce43ef1 a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:02 compute-0 nova_compute[185191]: 2026-01-27 15:40:02.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.224 185195 DEBUG nova.compute.manager [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.225 185195 DEBUG oslo_concurrency.lockutils [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.225 185195 DEBUG oslo_concurrency.lockutils [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.226 185195 DEBUG oslo_concurrency.lockutils [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.226 185195 DEBUG nova.compute.manager [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] No waiting events found dispatching network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.227 185195 WARNING nova.compute.manager [req-65029de5-5810-4716-97e1-1b83b49c2101 req-b74de43c-4cb9-4558-90b9-5685dfa76079 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received unexpected event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e for instance with vm_state active and task_state None.
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.581 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.582 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.601 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.760 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.761 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.768 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.768 185195 INFO nova.compute.claims [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.934 185195 DEBUG nova.compute.provider_tree [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.951 185195 DEBUG nova.scheduler.client.report [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.968 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.976 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:04 compute-0 nova_compute[185191]: 2026-01-27 15:40:04.976 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.031 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.032 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.057 185195 INFO nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.083 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.211 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.215 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.216 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.216 185195 INFO nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Creating image(s)
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.217 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.217 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.218 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.230 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:05 compute-0 NetworkManager[56090]: <info>  [1769528405.2374] manager: (patch-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 27 15:40:05 compute-0 NetworkManager[56090]: <info>  [1769528405.2405] manager: (patch-br-int-to-provnet-4c936f88-1f36-4f8e-9e54-5a1dee016d65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.289 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.291 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.291 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.304 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.365 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.366 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:05 compute-0 ovn_controller[97541]: 2026-01-27T15:40:05Z|00147|binding|INFO|Releasing lport 7867416a-c1b6-4934-a0ce-b1255fa030c3 from this chassis (sb_readonly=0)
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.384 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.392 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.409 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024,backing_fmt=raw /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.411 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "6df7a80195f5d103caacf3cbc37baa39fe6fd024" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.412 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.471 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.473 185195 DEBUG nova.virt.disk.api [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Checking if we can resize image /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.474 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.534 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.535 185195 DEBUG nova.virt.disk.api [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Cannot resize image /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.536 185195 DEBUG nova.objects.instance [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lazy-loading 'migration_context' on Instance uuid 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.556 185195 DEBUG nova.policy [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71aaddfe2e5a440da3af8d89984705b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f0146e24567428baacde411c6d73bda', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.560 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.561 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Ensure instance console log exists: /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.562 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.562 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.563 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.580 185195 DEBUG nova.compute.manager [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-changed-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.581 185195 DEBUG nova.compute.manager [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Refreshing instance network info cache due to event network-changed-813e4105-c4d2-422b-930b-0f60d675471e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.582 185195 DEBUG oslo_concurrency.lockutils [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.582 185195 DEBUG oslo_concurrency.lockutils [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:40:05 compute-0 nova_compute[185191]: 2026-01-27 15:40:05.582 185195 DEBUG nova.network.neutron [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Refreshing network info cache for port 813e4105-c4d2-422b-930b-0f60d675471e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:40:06 compute-0 nova_compute[185191]: 2026-01-27 15:40:06.061 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:07 compute-0 nova_compute[185191]: 2026-01-27 15:40:07.199 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Successfully created port: 7a46b87d-2beb-4cc1-bbcd-9213aff26623 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:40:07 compute-0 nova_compute[185191]: 2026-01-27 15:40:07.699 185195 DEBUG nova.network.neutron [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updated VIF entry in instance network info cache for port 813e4105-c4d2-422b-930b-0f60d675471e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:40:07 compute-0 nova_compute[185191]: 2026-01-27 15:40:07.700 185195 DEBUG nova.network.neutron [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updating instance_info_cache with network_info: [{"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:07 compute-0 nova_compute[185191]: 2026-01-27 15:40:07.731 185195 DEBUG oslo_concurrency.lockutils [req-a0c877bf-c68e-44e9-b2bc-52989650d42d req-a8a31064-8ce3-42bd-9769-545e7dee00ec 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.476 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Successfully updated port: 7a46b87d-2beb-4cc1-bbcd-9213aff26623 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.494 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.494 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquired lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.495 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.678 185195 DEBUG nova.compute.manager [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-changed-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.678 185195 DEBUG nova.compute.manager [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Refreshing instance network info cache due to event network-changed-7a46b87d-2beb-4cc1-bbcd-9213aff26623. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.679 185195 DEBUG oslo_concurrency.lockutils [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:40:08 compute-0 nova_compute[185191]: 2026-01-27 15:40:08.818 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.610 185195 DEBUG nova.network.neutron [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updating instance_info_cache with network_info: [{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.632 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Releasing lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.633 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Instance network_info: |[{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.634 185195 DEBUG oslo_concurrency.lockutils [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.635 185195 DEBUG nova.network.neutron [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Refreshing network info cache for port 7a46b87d-2beb-4cc1-bbcd-9213aff26623 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.637 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Start _get_guest_xml network_info=[{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.645 185195 WARNING nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.651 185195 DEBUG nova.virt.libvirt.host [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.652 185195 DEBUG nova.virt.libvirt.host [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.661 185195 DEBUG nova.virt.libvirt.host [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.662 185195 DEBUG nova.virt.libvirt.host [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.663 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.663 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:34:19Z,direct_url=<?>,disk_format='qcow2',id=fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd88ca4062da4fb9bedb3a0002a43c12',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:34:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.664 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.664 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.665 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.665 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.666 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.666 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.667 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.667 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.667 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.668 185195 DEBUG nova.virt.hardware [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.671 185195 DEBUG nova.virt.libvirt.vif [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1744154143',display_name='tempest-TestServerBasicOps-server-1744154143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1744154143',id=13,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeVuIdD1e2Iw5Jkg66oTKxWb47jyBHgE+MD+LICXxzi+CMtDZ/MvSe64UyPW2JMugzBTLHCKk8WD0Ib00Bo8evnO5aNxmlmBTNmihqRAk6IX5fKUiD9YgMUM/5FL+g4KQ==',key_name='tempest-TestServerBasicOps-469378869',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f0146e24567428baacde411c6d73bda',ramdisk_id='',reservation_id='r-vyu8k007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-235373023',owner_user_name='tempest-TestServerBasicOps-235373023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:40:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71aaddfe2e5a440da3af8d89984705b9',uuid=45d73e6a-cef2-413e-88e0-7e4bcd6dad4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.672 185195 DEBUG nova.network.os_vif_util [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converting VIF {"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.673 185195 DEBUG nova.network.os_vif_util [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.674 185195 DEBUG nova.objects.instance [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lazy-loading 'pci_devices' on Instance uuid 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.688 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <uuid>45d73e6a-cef2-413e-88e0-7e4bcd6dad4e</uuid>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <name>instance-0000000d</name>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:name>tempest-TestServerBasicOps-server-1744154143</nova:name>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:40:09</nova:creationTime>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:user uuid="71aaddfe2e5a440da3af8d89984705b9">tempest-TestServerBasicOps-235373023-project-member</nova:user>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:project uuid="7f0146e24567428baacde411c6d73bda">tempest-TestServerBasicOps-235373023</nova:project>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         <nova:port uuid="7a46b87d-2beb-4cc1-bbcd-9213aff26623">
Jan 27 15:40:09 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <system>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="serial">45d73e6a-cef2-413e-88e0-7e4bcd6dad4e</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="uuid">45d73e6a-cef2-413e-88e0-7e4bcd6dad4e</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </system>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <os>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </os>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <features>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </features>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.config"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:a6:86:bd"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <target dev="tap7a46b87d-2b"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/console.log" append="off"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <video>
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </video>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:40:09 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:40:09 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:40:09 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:40:09 compute-0 nova_compute[185191]: </domain>
Jan 27 15:40:09 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.697 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Preparing to wait for external event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.697 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.697 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.697 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.698 185195 DEBUG nova.virt.libvirt.vif [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1744154143',display_name='tempest-TestServerBasicOps-server-1744154143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1744154143',id=13,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeVuIdD1e2Iw5Jkg66oTKxWb47jyBHgE+MD+LICXxzi+CMtDZ/MvSe64UyPW2JMugzBTLHCKk8WD0Ib00Bo8evnO5aNxmlmBTNmihqRAk6IX5fKUiD9YgMUM/5FL+g4KQ==',key_name='tempest-TestServerBasicOps-469378869',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f0146e24567428baacde411c6d73bda',ramdisk_id='',reservation_id='r-vyu8k007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-235373023',owner_user_name='tempest-TestServerBasicOps-235373023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:40:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71aaddfe2e5a440da3af8d89984705b9',uuid=45d73e6a-cef2-413e-88e0-7e4bcd6dad4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.698 185195 DEBUG nova.network.os_vif_util [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converting VIF {"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.699 185195 DEBUG nova.network.os_vif_util [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.699 185195 DEBUG os_vif [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.701 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.702 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.702 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.706 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.706 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a46b87d-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.707 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a46b87d-2b, col_values=(('external_ids', {'iface-id': '7a46b87d-2beb-4cc1-bbcd-9213aff26623', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:86:bd', 'vm-uuid': '45d73e6a-cef2-413e-88e0-7e4bcd6dad4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:09 compute-0 NetworkManager[56090]: <info>  [1769528409.7100] manager: (tap7a46b87d-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.711 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.717 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.717 185195 INFO os_vif [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b')
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.778 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.779 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.779 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] No VIF found with MAC fa:16:3e:a6:86:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:40:09 compute-0 nova_compute[185191]: 2026-01-27 15:40:09.780 185195 INFO nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Using config drive
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.312 185195 INFO nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Creating config drive at /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.config
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.318 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptxnrbe71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.448 185195 DEBUG oslo_concurrency.processutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptxnrbe71" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:10 compute-0 kernel: tap7a46b87d-2b: entered promiscuous mode
Jan 27 15:40:10 compute-0 ovn_controller[97541]: 2026-01-27T15:40:10Z|00148|binding|INFO|Claiming lport 7a46b87d-2beb-4cc1-bbcd-9213aff26623 for this chassis.
Jan 27 15:40:10 compute-0 ovn_controller[97541]: 2026-01-27T15:40:10Z|00149|binding|INFO|7a46b87d-2beb-4cc1-bbcd-9213aff26623: Claiming fa:16:3e:a6:86:bd 10.100.0.9
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.5244] manager: (tap7a46b87d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.528 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:10 compute-0 ovn_controller[97541]: 2026-01-27T15:40:10Z|00150|binding|INFO|Setting lport 7a46b87d-2beb-4cc1-bbcd-9213aff26623 ovn-installed in OVS
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.545 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:10 compute-0 ovn_controller[97541]: 2026-01-27T15:40:10Z|00151|binding|INFO|Setting lport 7a46b87d-2beb-4cc1-bbcd-9213aff26623 up in Southbound
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.547 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:86:bd 10.100.0.9'], port_security=['fa:16:3e:a6:86:bd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45d73e6a-cef2-413e-88e0-7e4bcd6dad4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f0146e24567428baacde411c6d73bda', 'neutron:revision_number': '2', 'neutron:security_group_ids': '552023c9-a293-4b75-900a-b2b7c9e08ff8 d4b922d8-9caa-4721-973c-c12f4c90f96b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2993dea3-6392-4b20-8301-1899d7e33053, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=7a46b87d-2beb-4cc1-bbcd-9213aff26623) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.548 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 7a46b87d-2beb-4cc1-bbcd-9213aff26623 in datapath a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 bound to our chassis
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.550 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.555 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:10 compute-0 systemd-machined[156506]: New machine qemu-14-instance-0000000d.
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.565 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[45cfd9ab-30a0-4b13-8baf-4d1992e5d406]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.566 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ba0879-a1 in ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.568 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ba0879-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.568 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ccefc258-6778-447b-9c58-091f12eb0cf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.569 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[26a746e5-0aa4-401f-9c45-f30c19b64f6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.581 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8bd132-bcd3-43ed-afe9-61f01125fa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.597 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6ff636-9fed-4f1a-9982-2cfdf2274830]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 systemd-udevd[252355]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.6286] device (tap7a46b87d-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.630 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5fdd2f-370e-45fd-b7c8-a2209563c6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.6333] device (tap7a46b87d-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.638 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d15263-4bc6-405b-ad46-8845bc6bc827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.6403] manager: (tapa3ba0879-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Jan 27 15:40:10 compute-0 systemd-udevd[252359]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.673 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[1e753e33-9507-4744-959b-923b175cd1b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.676 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[96dee735-693a-4fd8-ae44-4f53ed01b418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.7020] device (tapa3ba0879-a0): carrier: link connected
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.706 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e1b24c-2239-4332-8a25-acee90a6fe0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.724 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7cf4dc-a32d-473a-b023-fb3c7b7e9a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ba0879-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:41:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598643, 'reachable_time': 16073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252384, 'error': None, 'target': 'ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.743 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6551a7a3-6327-4d8f-a1ee-f517b89e0481]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:4125'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598643, 'tstamp': 598643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252385, 'error': None, 'target': 'ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.759 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[68e2b682-a8ab-4ffc-a56c-4855fc6002b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ba0879-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:41:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598643, 'reachable_time': 16073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252386, 'error': None, 'target': 'ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.804 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef41f21-48a3-4a63-b909-101efb82403b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.880 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0c8518-afcd-4733-ad98-7bdfe04026e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.882 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ba0879-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.882 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.882 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ba0879-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:10 compute-0 kernel: tapa3ba0879-a0: entered promiscuous mode
Jan 27 15:40:10 compute-0 NetworkManager[56090]: <info>  [1769528410.8895] manager: (tapa3ba0879-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.890 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ba0879-a0, col_values=(('external_ids', {'iface-id': 'b688fc3e-30f9-4824-8b6b-522da7bd6079'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:10 compute-0 ovn_controller[97541]: 2026-01-27T15:40:10Z|00152|binding|INFO|Releasing lport b688fc3e-30f9-4824-8b6b-522da7bd6079 from this chassis (sb_readonly=0)
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.893 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.905 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.905 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.907 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4052b879-57c9-4cda-bcf4-f48e7125f00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.908 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0.pid.haproxy
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:40:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:10.908 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'env', 'PROCESS_TAG=haproxy-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.945 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.945 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.946 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.947 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.947 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.948 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:10 compute-0 nova_compute[185191]: 2026-01-27 15:40:10.991 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.016 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.017 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Image id fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87 yields fingerprint 6df7a80195f5d103caacf3cbc37baa39fe6fd024 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.018 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] image fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87 at (/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024): checking
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.019 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] image fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87 at (/var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.023 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.024 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] cb018734-6031-42f0-98a2-1cd3bfd95c69 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.024 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] cb018734-6031-42f0-98a2-1cd3bfd95c69 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.025 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.045 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528411.0235262, 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.046 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] VM Started (Lifecycle Event)
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.064 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.084 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.093 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528411.023691, 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.093 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] VM Paused (Lifecycle Event)
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.095 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.096 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance cb018734-6031-42f0-98a2-1cd3bfd95c69 is backed by 6df7a80195f5d103caacf3cbc37baa39fe6fd024 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.096 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.097 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.097 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.116 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:11.119 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.120 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.124 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.154 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.170 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.171 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 is backed by 6df7a80195f5d103caacf3cbc37baa39fe6fd024 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.171 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.172 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.172 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.204 185195 DEBUG nova.compute.manager [req-bfa4b598-d267-428c-9533-9377d198a5a1 req-321ee5a8-9b70-4574-8a1a-288f0998bf67 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.205 185195 DEBUG oslo_concurrency.lockutils [req-bfa4b598-d267-428c-9533-9377d198a5a1 req-321ee5a8-9b70-4574-8a1a-288f0998bf67 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.205 185195 DEBUG oslo_concurrency.lockutils [req-bfa4b598-d267-428c-9533-9377d198a5a1 req-321ee5a8-9b70-4574-8a1a-288f0998bf67 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.205 185195 DEBUG oslo_concurrency.lockutils [req-bfa4b598-d267-428c-9533-9377d198a5a1 req-321ee5a8-9b70-4574-8a1a-288f0998bf67 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.206 185195 DEBUG nova.compute.manager [req-bfa4b598-d267-428c-9533-9377d198a5a1 req-321ee5a8-9b70-4574-8a1a-288f0998bf67 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Processing event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.210 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.216 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528411.2156434, 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.216 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] VM Resumed (Lifecycle Event)
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.218 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.222 185195 INFO nova.virt.libvirt.driver [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Instance spawned successfully.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.223 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.236 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.246 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.249 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.250 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e is backed by 6df7a80195f5d103caacf3cbc37baa39fe6fd024 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.250 185195 WARNING nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.250 185195 WARNING nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.250 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Active base files: /var/lib/nova/instances/_base/6df7a80195f5d103caacf3cbc37baa39fe6fd024
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.250 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Removable base files: /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9 /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/29bcbb18dfbc63280f06b3fe1dcbacec35cfdfb9
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6cfa0c50405f22bddeb2f4c2b9e121870dd7feac
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 DEBUG nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.251 185195 INFO nova.virt.libvirt.imagecache [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.256 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.257 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.257 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.258 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.258 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.259 185195 DEBUG nova.virt.libvirt.driver [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.268 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.317 185195 INFO nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Took 6.10 seconds to spawn the instance on the hypervisor.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.318 185195 DEBUG nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:40:11 compute-0 podman[252434]: 2026-01-27 15:40:11.347085477 +0000 UTC m=+0.074967101 container create f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 15:40:11 compute-0 systemd[1]: Started libpod-conmon-f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92.scope.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.390 185195 INFO nova.compute.manager [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Took 6.66 seconds to build instance.
Jan 27 15:40:11 compute-0 podman[252434]: 2026-01-27 15:40:11.302079031 +0000 UTC m=+0.029960675 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.418 185195 DEBUG oslo_concurrency.lockutils [None req-488c442c-5443-447c-932f-6ff461068fa9 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:11 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e1e66d9fe220b2db104d99a10580215561922d495e022dc7f356c7f3a1648c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:40:11 compute-0 podman[252434]: 2026-01-27 15:40:11.448177108 +0000 UTC m=+0.176058752 container init f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:40:11 compute-0 podman[252434]: 2026-01-27 15:40:11.455996768 +0000 UTC m=+0.183878382 container start f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:40:11 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [NOTICE]   (252452) : New worker (252455) forked
Jan 27 15:40:11 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [NOTICE]   (252452) : Loading success.
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.484 185195 DEBUG nova.network.neutron [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updated VIF entry in instance network info cache for port 7a46b87d-2beb-4cc1-bbcd-9213aff26623. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.485 185195 DEBUG nova.network.neutron [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updating instance_info_cache with network_info: [{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:11 compute-0 nova_compute[185191]: 2026-01-27 15:40:11.508 185195 DEBUG oslo_concurrency.lockutils [req-63a366c9-011b-48e9-be9f-f78eea168bd0 req-4d011496-ceb8-450d-ba32-0a2f8a41bfd8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:11 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:11.522 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.315 185195 DEBUG nova.compute.manager [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.316 185195 DEBUG oslo_concurrency.lockutils [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.316 185195 DEBUG oslo_concurrency.lockutils [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.316 185195 DEBUG oslo_concurrency.lockutils [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.316 185195 DEBUG nova.compute.manager [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] No waiting events found dispatching network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:13 compute-0 nova_compute[185191]: 2026-01-27 15:40:13.316 185195 WARNING nova.compute.manager [req-56a65acc-5cff-4a47-92bf-a61ae1949008 req-ff2d0e33-b857-4128-9f10-49d68ea77ccd 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received unexpected event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 for instance with vm_state active and task_state None.
Jan 27 15:40:13 compute-0 podman[252464]: 2026-01-27 15:40:13.332624824 +0000 UTC m=+0.084365163 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 15:40:14 compute-0 nova_compute[185191]: 2026-01-27 15:40:14.709 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:15 compute-0 podman[252487]: 2026-01-27 15:40:15.319578037 +0000 UTC m=+0.069488154 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Jan 27 15:40:15 compute-0 podman[252485]: 2026-01-27 15:40:15.325994319 +0000 UTC m=+0.083650764 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:40:15 compute-0 podman[252486]: 2026-01-27 15:40:15.354676599 +0000 UTC m=+0.109078817 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:40:15 compute-0 nova_compute[185191]: 2026-01-27 15:40:15.549 185195 DEBUG nova.compute.manager [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-changed-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:15 compute-0 nova_compute[185191]: 2026-01-27 15:40:15.549 185195 DEBUG nova.compute.manager [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Refreshing instance network info cache due to event network-changed-7a46b87d-2beb-4cc1-bbcd-9213aff26623. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:40:15 compute-0 nova_compute[185191]: 2026-01-27 15:40:15.550 185195 DEBUG oslo_concurrency.lockutils [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:40:15 compute-0 nova_compute[185191]: 2026-01-27 15:40:15.550 185195 DEBUG oslo_concurrency.lockutils [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:40:15 compute-0 nova_compute[185191]: 2026-01-27 15:40:15.550 185195 DEBUG nova.network.neutron [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Refreshing network info cache for port 7a46b87d-2beb-4cc1-bbcd-9213aff26623 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:40:16 compute-0 nova_compute[185191]: 2026-01-27 15:40:16.066 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:17 compute-0 nova_compute[185191]: 2026-01-27 15:40:17.141 185195 DEBUG nova.network.neutron [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updated VIF entry in instance network info cache for port 7a46b87d-2beb-4cc1-bbcd-9213aff26623. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:40:17 compute-0 nova_compute[185191]: 2026-01-27 15:40:17.141 185195 DEBUG nova.network.neutron [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updating instance_info_cache with network_info: [{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:17 compute-0 nova_compute[185191]: 2026-01-27 15:40:17.161 185195 DEBUG oslo_concurrency.lockutils [req-e577a9d8-bfe3-436e-aa46-84caff65e74a req-ce02ebea-6f8e-416b-a576-a9f0f5f19a6c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:19 compute-0 nova_compute[185191]: 2026-01-27 15:40:19.713 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:19 compute-0 nova_compute[185191]: 2026-01-27 15:40:19.809 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:21 compute-0 nova_compute[185191]: 2026-01-27 15:40:21.068 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:21 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:21.524 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:24 compute-0 podman[252564]: 2026-01-27 15:40:24.337294865 +0000 UTC m=+0.086869581 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:40:24 compute-0 nova_compute[185191]: 2026-01-27 15:40:24.715 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:26 compute-0 nova_compute[185191]: 2026-01-27 15:40:26.071 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:26 compute-0 nova_compute[185191]: 2026-01-27 15:40:26.675 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:28 compute-0 podman[252585]: 2026-01-27 15:40:28.349437408 +0000 UTC m=+0.097571108 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, config_id=kepler, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=)
Jan 27 15:40:28 compute-0 podman[252586]: 2026-01-27 15:40:28.363833994 +0000 UTC m=+0.109448897 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:40:29 compute-0 nova_compute[185191]: 2026-01-27 15:40:29.720 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:29 compute-0 podman[201073]: time="2026-01-27T15:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:40:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29739 "" "Go-http-client/1.1"
Jan 27 15:40:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Jan 27 15:40:31 compute-0 nova_compute[185191]: 2026-01-27 15:40:31.074 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:31 compute-0 podman[252627]: 2026-01-27 15:40:31.309012804 +0000 UTC m=+0.063443732 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:40:31 compute-0 openstack_network_exporter[204239]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:40:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:40:31 compute-0 openstack_network_exporter[204239]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:40:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:40:33 compute-0 nova_compute[185191]: 2026-01-27 15:40:33.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:33 compute-0 nova_compute[185191]: 2026-01-27 15:40:33.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:40:34 compute-0 nova_compute[185191]: 2026-01-27 15:40:34.723 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:36 compute-0 nova_compute[185191]: 2026-01-27 15:40:36.078 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:36 compute-0 ovn_controller[97541]: 2026-01-27T15:40:36Z|00153|binding|INFO|Releasing lport 7867416a-c1b6-4934-a0ce-b1255fa030c3 from this chassis (sb_readonly=0)
Jan 27 15:40:36 compute-0 ovn_controller[97541]: 2026-01-27T15:40:36Z|00154|binding|INFO|Releasing lport b688fc3e-30f9-4824-8b6b-522da7bd6079 from this chassis (sb_readonly=0)
Jan 27 15:40:36 compute-0 nova_compute[185191]: 2026-01-27 15:40:36.759 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:36 compute-0 nova_compute[185191]: 2026-01-27 15:40:36.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:38 compute-0 ovn_controller[97541]: 2026-01-27T15:40:38Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:fd:e5 10.100.0.13
Jan 27 15:40:38 compute-0 ovn_controller[97541]: 2026-01-27T15:40:38Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:fd:e5 10.100.0.13
Jan 27 15:40:38 compute-0 nova_compute[185191]: 2026-01-27 15:40:38.962 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:38 compute-0 nova_compute[185191]: 2026-01-27 15:40:38.996 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:38 compute-0 nova_compute[185191]: 2026-01-27 15:40:38.996 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:38 compute-0 nova_compute[185191]: 2026-01-27 15:40:38.997 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:38 compute-0 nova_compute[185191]: 2026-01-27 15:40:38.997 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.092 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.158 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.165 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.233 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.241 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.328 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.330 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.398 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.405 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.470 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.472 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.536 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.727 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.965 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.966 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4790MB free_disk=72.31977844238281GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.967 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:39 compute-0 nova_compute[185191]: 2026-01-27 15:40:39.967 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.285 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance cb018734-6031-42f0-98a2-1cd3bfd95c69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.286 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.286 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.286 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.287 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.391 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.468 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.469 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.491 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.514 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.653 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.671 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.710 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:40:40 compute-0 nova_compute[185191]: 2026-01-27 15:40:40.711 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:41 compute-0 nova_compute[185191]: 2026-01-27 15:40:41.080 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:44 compute-0 podman[252687]: 2026-01-27 15:40:44.34060838 +0000 UTC m=+0.097305411 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:40:44 compute-0 nova_compute[185191]: 2026-01-27 15:40:44.665 185195 INFO nova.compute.manager [None req-2c83cd29-f28c-40a2-a529-4f5208f5215d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Get console output
Jan 27 15:40:44 compute-0 nova_compute[185191]: 2026-01-27 15:40:44.677 238468 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 27 15:40:44 compute-0 nova_compute[185191]: 2026-01-27 15:40:44.731 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.236 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.237 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.237 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.238 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.238 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.239 185195 INFO nova.compute.manager [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Terminating instance
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.240 185195 DEBUG nova.compute.manager [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:40:45 compute-0 kernel: tap813e4105-c4 (unregistering): left promiscuous mode
Jan 27 15:40:45 compute-0 NetworkManager[56090]: <info>  [1769528445.2777] device (tap813e4105-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.288 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 ovn_controller[97541]: 2026-01-27T15:40:45Z|00155|binding|INFO|Releasing lport 813e4105-c4d2-422b-930b-0f60d675471e from this chassis (sb_readonly=0)
Jan 27 15:40:45 compute-0 ovn_controller[97541]: 2026-01-27T15:40:45Z|00156|binding|INFO|Setting lport 813e4105-c4d2-422b-930b-0f60d675471e down in Southbound
Jan 27 15:40:45 compute-0 ovn_controller[97541]: 2026-01-27T15:40:45Z|00157|binding|INFO|Removing iface tap813e4105-c4 ovn-installed in OVS
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.295 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:fd:e5 10.100.0.13'], port_security=['fa:16:3e:95:fd:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6a8ebba-2d8f-4d9c-b173-65a0b035bf25', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69348b1d-27dc-488f-b1c0-e5faaa154377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff135d375334408199a41eb5e406fa31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f9ec478-f259-49a7-97fb-15e0561e7285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03063d12-1719-4bc3-90aa-20f60e1e1459, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=813e4105-c4d2-422b-930b-0f60d675471e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.296 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 813e4105-c4d2-422b-930b-0f60d675471e in datapath 69348b1d-27dc-488f-b1c0-e5faaa154377 unbound from our chassis
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.298 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69348b1d-27dc-488f-b1c0-e5faaa154377
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.301 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.307 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.323 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3ba531-267d-4e65-a1ae-7b6030299767]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 27 15:40:45 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 37.716s CPU time.
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.353 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[13b7c6cf-9729-44a9-97a2-1b3604bcde09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.358 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0d2c87-2545-40c9-bf47-a2938bcaba36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 systemd-machined[156506]: Machine qemu-13-instance-0000000c terminated.
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.407 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[80f66722-3566-40c3-b93e-65937618f486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.427 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[868d20f6-6ea2-4a12-ae4c-b3572baa2f6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69348b1d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:98:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589677, 'reachable_time': 39322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252754, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.444 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[30322a62-1396-4cd2-a66a-22054c700734]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69348b1d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589688, 'tstamp': 589688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252772, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69348b1d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589690, 'tstamp': 589690}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252772, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.446 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69348b1d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.447 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.454 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.455 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69348b1d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.455 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.455 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69348b1d-20, col_values=(('external_ids', {'iface-id': '7867416a-c1b6-4934-a0ce-b1255fa030c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:45 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:45.456 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:40:45 compute-0 podman[252728]: 2026-01-27 15:40:45.460501722 +0000 UTC m=+0.080421177 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal)
Jan 27 15:40:45 compute-0 podman[252727]: 2026-01-27 15:40:45.469065702 +0000 UTC m=+0.091739851 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.475 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.480 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 podman[252739]: 2026-01-27 15:40:45.497104884 +0000 UTC m=+0.104764071 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.526 185195 INFO nova.virt.libvirt.driver [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Instance destroyed successfully.
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.526 185195 DEBUG nova.objects.instance [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'resources' on Instance uuid c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.655 185195 DEBUG nova.virt.libvirt.vif [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:39:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701197020',display_name='tempest-TestNetworkBasicOps-server-701197020',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701197020',id=12,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOUlp6JafqTPPTj9gctWgt13/hyzuI7pE69zFBWCEnpPttD7IljnNNGlfwUPZFbP4I4yrjIATGZU+V9QLjFjTq2Je/ZYeNB3z0rE4slEuZdtnPGg0CtDoVYmo/nZptAhlQ==',key_name='tempest-TestNetworkBasicOps-242086254',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:40:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-mqf58h0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:40:01Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=c6a8ebba-2d8f-4d9c-b173-65a0b035bf25,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.656 185195 DEBUG nova.network.os_vif_util [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "813e4105-c4d2-422b-930b-0f60d675471e", "address": "fa:16:3e:95:fd:e5", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap813e4105-c4", "ovs_interfaceid": "813e4105-c4d2-422b-930b-0f60d675471e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.657 185195 DEBUG nova.network.os_vif_util [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.658 185195 DEBUG os_vif [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.661 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.662 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap813e4105-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.664 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.665 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.668 185195 INFO os_vif [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:fd:e5,bridge_name='br-int',has_traffic_filtering=True,id=813e4105-c4d2-422b-930b-0f60d675471e,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap813e4105-c4')
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.669 185195 INFO nova.virt.libvirt.driver [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Deleting instance files /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25_del
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.670 185195 INFO nova.virt.libvirt.driver [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Deletion of /var/lib/nova/instances/c6a8ebba-2d8f-4d9c-b173-65a0b035bf25_del complete
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.987 185195 INFO nova.compute.manager [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.988 185195 DEBUG oslo.service.loopingcall [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.989 185195 DEBUG nova.compute.manager [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:40:45 compute-0 nova_compute[185191]: 2026-01-27 15:40:45.989 185195 DEBUG nova.network.neutron [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.085 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.268 185195 DEBUG nova.compute.manager [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-unplugged-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.269 185195 DEBUG oslo_concurrency.lockutils [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.271 185195 DEBUG oslo_concurrency.lockutils [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.273 185195 DEBUG oslo_concurrency.lockutils [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.275 185195 DEBUG nova.compute.manager [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] No waiting events found dispatching network-vif-unplugged-813e4105-c4d2-422b-930b-0f60d675471e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:46 compute-0 nova_compute[185191]: 2026-01-27 15:40:46.277 185195 DEBUG nova.compute.manager [req-c5b195d6-15db-416d-9c39-da8cd0f8deda req-968bf483-8cbd-4fff-ae62-d4bf48906fae 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-unplugged-813e4105-c4d2-422b-930b-0f60d675471e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:40:46 compute-0 ovn_controller[97541]: 2026-01-27T15:40:46Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:86:bd 10.100.0.9
Jan 27 15:40:46 compute-0 ovn_controller[97541]: 2026-01-27T15:40:46Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:86:bd 10.100.0.9
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.312 185195 DEBUG nova.network.neutron [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.342 185195 INFO nova.compute.manager [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Took 1.35 seconds to deallocate network for instance.
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.391 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.392 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.487 185195 DEBUG nova.compute.provider_tree [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.524 185195 DEBUG nova.scheduler.client.report [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.552 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.595 185195 INFO nova.scheduler.client.report [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Deleted allocations for instance c6a8ebba-2d8f-4d9c-b173-65a0b035bf25
Jan 27 15:40:47 compute-0 nova_compute[185191]: 2026-01-27 15:40:47.682 185195 DEBUG oslo_concurrency.lockutils [None req-db508564-6f25-409f-8a08-b141e582e45d a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.384 185195 DEBUG nova.compute.manager [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.385 185195 DEBUG oslo_concurrency.lockutils [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.385 185195 DEBUG oslo_concurrency.lockutils [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.386 185195 DEBUG oslo_concurrency.lockutils [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "c6a8ebba-2d8f-4d9c-b173-65a0b035bf25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.386 185195 DEBUG nova.compute.manager [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] No waiting events found dispatching network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.387 185195 WARNING nova.compute.manager [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received unexpected event network-vif-plugged-813e4105-c4d2-422b-930b-0f60d675471e for instance with vm_state deleted and task_state None.
Jan 27 15:40:48 compute-0 nova_compute[185191]: 2026-01-27 15:40:48.387 185195 DEBUG nova.compute.manager [req-85bbe952-b692-4cf9-b2b4-9a2bda80d12b req-a004449a-5657-4d36-9e12-778241a378fb 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Received event network-vif-deleted-813e4105-c4d2-422b-930b-0f60d675471e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:49 compute-0 nova_compute[185191]: 2026-01-27 15:40:49.694 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:49 compute-0 nova_compute[185191]: 2026-01-27 15:40:49.694 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:49 compute-0 nova_compute[185191]: 2026-01-27 15:40:49.695 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:49 compute-0 nova_compute[185191]: 2026-01-27 15:40:49.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:49 compute-0 nova_compute[185191]: 2026-01-27 15:40:49.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.465 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.467 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.467 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.467 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.468 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.469 185195 INFO nova.compute.manager [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Terminating instance
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.470 185195 DEBUG nova.compute.manager [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:40:50 compute-0 kernel: tapb3766198-88 (unregistering): left promiscuous mode
Jan 27 15:40:50 compute-0 NetworkManager[56090]: <info>  [1769528450.5069] device (tapb3766198-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:40:50 compute-0 ovn_controller[97541]: 2026-01-27T15:40:50Z|00158|binding|INFO|Releasing lport b3766198-88ae-43c4-8f5d-53661a568cde from this chassis (sb_readonly=0)
Jan 27 15:40:50 compute-0 ovn_controller[97541]: 2026-01-27T15:40:50Z|00159|binding|INFO|Setting lport b3766198-88ae-43c4-8f5d-53661a568cde down in Southbound
Jan 27 15:40:50 compute-0 ovn_controller[97541]: 2026-01-27T15:40:50Z|00160|binding|INFO|Removing iface tapb3766198-88 ovn-installed in OVS
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.515 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.517 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.526 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8b:a2 10.100.0.6'], port_security=['fa:16:3e:3a:8b:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cb018734-6031-42f0-98a2-1cd3bfd95c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69348b1d-27dc-488f-b1c0-e5faaa154377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff135d375334408199a41eb5e406fa31', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a7eea46a-2779-4c19-92df-561b56dcec78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03063d12-1719-4bc3-90aa-20f60e1e1459, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=b3766198-88ae-43c4-8f5d-53661a568cde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.527 106793 INFO neutron.agent.ovn.metadata.agent [-] Port b3766198-88ae-43c4-8f5d-53661a568cde in datapath 69348b1d-27dc-488f-b1c0-e5faaa154377 unbound from our chassis
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.529 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69348b1d-27dc-488f-b1c0-e5faaa154377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.529 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fb1b88-2e2a-4d49-89cd-dab5856addb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.530 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377 namespace which is not needed anymore
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.533 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 27 15:40:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 46.234s CPU time.
Jan 27 15:40:50 compute-0 systemd-machined[156506]: Machine qemu-12-instance-0000000b terminated.
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.665 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [NOTICE]   (251647) : haproxy version is 2.8.14-c23fe91
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [NOTICE]   (251647) : path to executable is /usr/sbin/haproxy
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [WARNING]  (251647) : Exiting Master process...
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [WARNING]  (251647) : Exiting Master process...
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [ALERT]    (251647) : Current worker (251649) exited with code 143 (Terminated)
Jan 27 15:40:50 compute-0 neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377[251643]: [WARNING]  (251647) : All workers exited. Exiting... (0)
Jan 27 15:40:50 compute-0 systemd[1]: libpod-595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2.scope: Deactivated successfully.
Jan 27 15:40:50 compute-0 podman[252831]: 2026-01-27 15:40:50.702329041 +0000 UTC m=+0.066549366 container died 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.736 185195 INFO nova.virt.libvirt.driver [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Instance destroyed successfully.
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.737 185195 DEBUG nova.objects.instance [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lazy-loading 'resources' on Instance uuid cb018734-6031-42f0-98a2-1cd3bfd95c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2-userdata-shm.mount: Deactivated successfully.
Jan 27 15:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f002dd14444ced53e62ed905be355ea0173045f7272471e0ab876231ca258de9-merged.mount: Deactivated successfully.
Jan 27 15:40:50 compute-0 podman[252831]: 2026-01-27 15:40:50.754313105 +0000 UTC m=+0.118533440 container cleanup 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.754 185195 DEBUG nova.virt.libvirt.vif [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1100779126',display_name='tempest-TestNetworkBasicOps-server-1100779126',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1100779126',id=11,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHf8da76ICP1FE4SxDbt3YLW/bs/58jyYG47+B9oCgXw3XIrB9hFCTLCXEqtUY3LzA0WMyYL5qCR/vJiWNNnwJ3t2/4Ht1zYjhMss6JgqFnNVdGGTHrJ9AkX90eos/vFVg==',key_name='tempest-TestNetworkBasicOps-1771239932',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:38:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff135d375334408199a41eb5e406fa31',ramdisk_id='',reservation_id='r-e1be0u24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1734510166',owner_user_name='tempest-TestNetworkBasicOps-1734510166-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:38:46Z,user_data=None,user_id='a5debc8bd8b947ef8b11b0edb9d8624e',uuid=cb018734-6031-42f0-98a2-1cd3bfd95c69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.755 185195 DEBUG nova.network.os_vif_util [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converting VIF {"id": "b3766198-88ae-43c4-8f5d-53661a568cde", "address": "fa:16:3e:3a:8b:a2", "network": {"id": "69348b1d-27dc-488f-b1c0-e5faaa154377", "bridge": "br-int", "label": "tempest-network-smoke--1655557402", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff135d375334408199a41eb5e406fa31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3766198-88", "ovs_interfaceid": "b3766198-88ae-43c4-8f5d-53661a568cde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.756 185195 DEBUG nova.network.os_vif_util [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.756 185195 DEBUG os_vif [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.759 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.759 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3766198-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.761 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.763 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:40:50 compute-0 systemd[1]: libpod-conmon-595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2.scope: Deactivated successfully.
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.767 185195 INFO os_vif [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=b3766198-88ae-43c4-8f5d-53661a568cde,network=Network(69348b1d-27dc-488f-b1c0-e5faaa154377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3766198-88')
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.768 185195 INFO nova.virt.libvirt.driver [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Deleting instance files /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69_del
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.768 185195 INFO nova.virt.libvirt.driver [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Deletion of /var/lib/nova/instances/cb018734-6031-42f0-98a2-1cd3bfd95c69_del complete
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.810 185195 DEBUG nova.compute.manager [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-unplugged-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.811 185195 DEBUG oslo_concurrency.lockutils [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.811 185195 DEBUG oslo_concurrency.lockutils [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.812 185195 DEBUG oslo_concurrency.lockutils [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.812 185195 DEBUG nova.compute.manager [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] No waiting events found dispatching network-vif-unplugged-b3766198-88ae-43c4-8f5d-53661a568cde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.813 185195 DEBUG nova.compute.manager [req-0ded29c0-a180-4ae6-b31b-506a34b245c5 req-53d822aa-573a-4b61-a23b-bba1c1a9c18f 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-unplugged-b3766198-88ae-43c4-8f5d-53661a568cde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.835 185195 INFO nova.compute.manager [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.836 185195 DEBUG oslo.service.loopingcall [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.837 185195 DEBUG nova.compute.manager [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.837 185195 DEBUG nova.network.neutron [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:40:50 compute-0 podman[252875]: 2026-01-27 15:40:50.854362238 +0000 UTC m=+0.074336084 container remove 595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.862 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9e441dac-8f09-4063-9c31-d18702fd7e8a]: (4, ('Tue Jan 27 03:40:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377 (595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2)\n595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2\nTue Jan 27 03:40:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377 (595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2)\n595a663e79091eb43f4cb19334467281ce38df45c2daec879401c9a9fb56eba2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.863 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c35255ff-4efb-4a24-9b59-e4e23ed041e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.864 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69348b1d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.866 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 kernel: tap69348b1d-20: left promiscuous mode
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.869 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.872 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[093700db-8520-4427-b688-e05b94881e88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 nova_compute[185191]: 2026-01-27 15:40:50.885 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.892 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d77568e2-fbdd-457f-9c74-127dc0f14df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.894 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fad93908-37a6-4ee3-8677-fbcb9bc26457]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.910 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[11bf11c8-5041-4d80-baef-320da5a7543d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589669, 'reachable_time': 27127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252888, 'error': None, 'target': 'ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.913 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69348b1d-27dc-488f-b1c0-e5faaa154377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:40:50 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:40:50.913 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[7330e22f-029a-423a-ae48-3d3530ec2a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:40:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d69348b1d\x2d27dc\x2d488f\x2db1c0\x2de5faaa154377.mount: Deactivated successfully.
Jan 27 15:40:51 compute-0 nova_compute[185191]: 2026-01-27 15:40:51.086 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:51 compute-0 nova_compute[185191]: 2026-01-27 15:40:51.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:51 compute-0 nova_compute[185191]: 2026-01-27 15:40:51.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:40:51 compute-0 nova_compute[185191]: 2026-01-27 15:40:51.966 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.149 185195 DEBUG nova.network.neutron [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.179 185195 INFO nova.compute.manager [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Took 1.34 seconds to deallocate network for instance.
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.225 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.226 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.308 185195 DEBUG nova.compute.provider_tree [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.330 185195 DEBUG nova.scheduler.client.report [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.366 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.390 185195 INFO nova.scheduler.client.report [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Deleted allocations for instance cb018734-6031-42f0-98a2-1cd3bfd95c69
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.473 185195 DEBUG oslo_concurrency.lockutils [None req-9a5cbe35-97c4-492e-b604-be8bb2ea5dbe a5debc8bd8b947ef8b11b0edb9d8624e ff135d375334408199a41eb5e406fa31 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.915 185195 DEBUG nova.compute.manager [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.915 185195 DEBUG oslo_concurrency.lockutils [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.915 185195 DEBUG oslo_concurrency.lockutils [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.915 185195 DEBUG oslo_concurrency.lockutils [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "cb018734-6031-42f0-98a2-1cd3bfd95c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.916 185195 DEBUG nova.compute.manager [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] No waiting events found dispatching network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.916 185195 WARNING nova.compute.manager [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received unexpected event network-vif-plugged-b3766198-88ae-43c4-8f5d-53661a568cde for instance with vm_state deleted and task_state None.
Jan 27 15:40:52 compute-0 nova_compute[185191]: 2026-01-27 15:40:52.916 185195 DEBUG nova.compute.manager [req-03c60310-94d3-4217-8377-a18073f9ff3b req-9173c174-e684-452d-92f8-14ae04fda538 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Received event network-vif-deleted-b3766198-88ae-43c4-8f5d-53661a568cde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:40:54 compute-0 nova_compute[185191]: 2026-01-27 15:40:54.966 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:54 compute-0 nova_compute[185191]: 2026-01-27 15:40:54.967 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:40:54 compute-0 nova_compute[185191]: 2026-01-27 15:40:54.967 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:40:55 compute-0 nova_compute[185191]: 2026-01-27 15:40:55.252 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:40:55 compute-0 nova_compute[185191]: 2026-01-27 15:40:55.252 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:40:55 compute-0 nova_compute[185191]: 2026-01-27 15:40:55.252 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:40:55 compute-0 nova_compute[185191]: 2026-01-27 15:40:55.253 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:40:55 compute-0 podman[252889]: 2026-01-27 15:40:55.373017573 +0000 UTC m=+0.117237525 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 15:40:55 compute-0 nova_compute[185191]: 2026-01-27 15:40:55.764 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:56 compute-0 nova_compute[185191]: 2026-01-27 15:40:56.089 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.190 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updating instance_info_cache with network_info: [{"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.215 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.215 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.215 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.216 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.216 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:40:57 compute-0 ovn_controller[97541]: 2026-01-27T15:40:57Z|00161|binding|INFO|Releasing lport b688fc3e-30f9-4824-8b6b-522da7bd6079 from this chassis (sb_readonly=0)
Jan 27 15:40:57 compute-0 nova_compute[185191]: 2026-01-27 15:40:57.552 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:40:59 compute-0 podman[252908]: 2026-01-27 15:40:59.30101071 +0000 UTC m=+0.058734366 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:40:59 compute-0 podman[252907]: 2026-01-27 15:40:59.343845629 +0000 UTC m=+0.102714026 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, 
middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler)
Jan 27 15:40:59 compute-0 podman[201073]: time="2026-01-27T15:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:40:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28506 "" "Go-http-client/1.1"
Jan 27 15:40:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:41:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:00.262 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:00 compute-0 nova_compute[185191]: 2026-01-27 15:41:00.524 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528445.522502, c6a8ebba-2d8f-4d9c-b173-65a0b035bf25 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:00 compute-0 nova_compute[185191]: 2026-01-27 15:41:00.524 185195 INFO nova.compute.manager [-] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] VM Stopped (Lifecycle Event)
Jan 27 15:41:00 compute-0 nova_compute[185191]: 2026-01-27 15:41:00.553 185195 DEBUG nova.compute.manager [None req-55474a4e-60f9-44d3-910c-9666e799f1d0 - - - - - -] [instance: c6a8ebba-2d8f-4d9c-b173-65a0b035bf25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:00 compute-0 nova_compute[185191]: 2026-01-27 15:41:00.768 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:01 compute-0 nova_compute[185191]: 2026-01-27 15:41:01.091 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:01 compute-0 openstack_network_exporter[204239]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:41:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:41:01 compute-0 openstack_network_exporter[204239]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:41:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:41:01 compute-0 podman[252951]: 2026-01-27 15:41:01.808962865 +0000 UTC m=+0.058887480 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:41:04 compute-0 nova_compute[185191]: 2026-01-27 15:41:04.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:05 compute-0 nova_compute[185191]: 2026-01-27 15:41:05.734 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528450.7321594, cb018734-6031-42f0-98a2-1cd3bfd95c69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:05 compute-0 nova_compute[185191]: 2026-01-27 15:41:05.735 185195 INFO nova.compute.manager [-] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] VM Stopped (Lifecycle Event)
Jan 27 15:41:05 compute-0 nova_compute[185191]: 2026-01-27 15:41:05.760 185195 DEBUG nova.compute.manager [None req-cdcf3519-4d1a-452c-986f-2f3289b8be36 - - - - - -] [instance: cb018734-6031-42f0-98a2-1cd3bfd95c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:05 compute-0 nova_compute[185191]: 2026-01-27 15:41:05.772 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:06 compute-0 nova_compute[185191]: 2026-01-27 15:41:06.093 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:07 compute-0 sshd-session[252974]: Invalid user sol from 2.57.122.238 port 50222
Jan 27 15:41:07 compute-0 sshd-session[252974]: Connection closed by invalid user sol 2.57.122.238 port 50222 [preauth]
Jan 27 15:41:07 compute-0 nova_compute[185191]: 2026-01-27 15:41:07.946 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:10 compute-0 nova_compute[185191]: 2026-01-27 15:41:10.776 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.993 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.994 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:41:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:11.001 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:41:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:11.002 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:41:11 compute-0 nova_compute[185191]: 2026-01-27 15:41:11.097 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.090 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Tue, 27 Jan 2026 15:41:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4e451494-2479-4c13-8886-fb6ede3e6828 x-openstack-request-id: req-4e451494-2479-4c13-8886-fb6ede3e6828 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.090 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e", "name": "tempest-TestServerBasicOps-server-1744154143", "status": "ACTIVE", "tenant_id": "7f0146e24567428baacde411c6d73bda", "user_id": "71aaddfe2e5a440da3af8d89984705b9", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "5846588f24471a46813bf577a0aa9f1304835e8488cc9d3e31dfda78", "image": {"id": "fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:40:03Z", "updated": "2026-01-27T15:40:11Z", "addresses": {"tempest-TestServerBasicOps-1439787436-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a6:86:bd"}, {"version": 4, "addr": "192.168.122.231", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a6:86:bd"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-469378869", "OS-SRV-USG:launched_at": "2026-01-27T15:40:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1181404914"}, {"name": "tempest-securitygroup--836537615"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.090 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e used request id req-4e451494-2479-4c13-8886-fb6ede3e6828 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.091 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '45d73e6a-cef2-413e-88e0-7e4bcd6dad4e', 'name': 'tempest-TestServerBasicOps-server-1744154143', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '7f0146e24567428baacde411c6d73bda', 'user_id': '71aaddfe2e5a440da3af8d89984705b9', 'hostId': '5846588f24471a46813bf577a0aa9f1304835e8488cc9d3e31dfda78', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.091 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:41:12.092117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.138 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.latency volume: 3719035162 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.139 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.140 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.140 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.140 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:41:12.140084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:41:12.141806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.155 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.155 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:41:12.156891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.160 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e / tap7a46b87d-2b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.160 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:41:12.161362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:41:12.162524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:41:12.163456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:41:12.164422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.187 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/cpu volume: 34580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.189 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:41:12.188946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/memory.usage volume: 46.68359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:41:12.190307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:41:12.191547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:41:12.192837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1744154143>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1744154143>]
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:41:12.194055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:41:12.195163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:41:12.196420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.197 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.198 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.199 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:41:12.197542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.202 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.bytes volume: 30513664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.202 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:41:12.198852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:41:12.200585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:41:12.201778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.latency volume: 1339788759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:41:12.203173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.204 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.latency volume: 73460792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:41:12.204316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1744154143>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1744154143>]
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:41:12.205880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.207 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.207 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:41:12.206898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:41:12.208315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.209 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:41:12.209737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.211 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.bytes volume: 72962048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.211 14 DEBUG ceilometer.compute.pollsters [-] 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:41:12.210909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:12 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:41:12.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:41:13 compute-0 nova_compute[185191]: 2026-01-27 15:41:13.087 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:13 compute-0 sshd-session[252977]: Invalid user sol from 45.148.10.240 port 40710
Jan 27 15:41:13 compute-0 sshd-session[252977]: Connection closed by invalid user sol 45.148.10.240 port 40710 [preauth]
Jan 27 15:41:14 compute-0 podman[252979]: 2026-01-27 15:41:14.740142189 +0000 UTC m=+0.056440774 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:41:15 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:15.183 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:41:15 compute-0 nova_compute[185191]: 2026-01-27 15:41:15.184 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:15 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:15.185 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:41:15 compute-0 nova_compute[185191]: 2026-01-27 15:41:15.779 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:16 compute-0 nova_compute[185191]: 2026-01-27 15:41:16.098 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:16 compute-0 podman[253001]: 2026-01-27 15:41:16.321147766 +0000 UTC m=+0.064865560 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64)
Jan 27 15:41:16 compute-0 podman[252999]: 2026-01-27 15:41:16.336981641 +0000 UTC m=+0.084196919 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 15:41:16 compute-0 podman[253000]: 2026-01-27 15:41:16.351688925 +0000 UTC m=+0.098125912 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 15:41:18 compute-0 nova_compute[185191]: 2026-01-27 15:41:18.635 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:18.713 107178 DEBUG eventlet.wsgi.server [-] (107178) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:18.715 107178 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: Accept: */*
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: Connection: close
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: Content-Type: text/plain
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: Host: 169.254.169.254
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: User-Agent: curl/7.84.0
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: X-Forwarded-For: 10.100.0.9
Jan 27 15:41:18 compute-0 ovn_metadata_agent[106788]: X-Ovn-Network-Id: a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.260 107178 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.260 107178 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5465109
Jan 27 15:41:20 compute-0 haproxy-metadata-proxy-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252455]: 10.100.0.9:37918 [27/Jan/2026:15:41:18.712] listener listener/metadata 0/0/0/1548/1548 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.395 107178 DEBUG eventlet.wsgi.server [-] (107178) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.397 107178 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: Accept: */*
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: Connection: close
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: Content-Length: 100
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: Content-Type: application/x-www-form-urlencoded
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: Host: 169.254.169.254
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: User-Agent: curl/7.84.0
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: X-Forwarded-For: 10.100.0.9
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: X-Ovn-Network-Id: a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.714 107178 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 27 15:41:20 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:20.715 107178 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3179984
Jan 27 15:41:20 compute-0 haproxy-metadata-proxy-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252455]: 10.100.0.9:37928 [27/Jan/2026:15:41:20.394] listener listener/metadata 0/0/0/321/321 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 27 15:41:20 compute-0 nova_compute[185191]: 2026-01-27 15:41:20.784 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:21 compute-0 nova_compute[185191]: 2026-01-27 15:41:21.101 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:22 compute-0 nova_compute[185191]: 2026-01-27 15:41:22.442 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.011 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.012 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.012 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.012 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.013 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.014 185195 INFO nova.compute.manager [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Terminating instance
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.015 185195 DEBUG nova.compute.manager [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:41:23 compute-0 kernel: tap7a46b87d-2b (unregistering): left promiscuous mode
Jan 27 15:41:23 compute-0 NetworkManager[56090]: <info>  [1769528483.0432] device (tap7a46b87d-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:41:23 compute-0 ovn_controller[97541]: 2026-01-27T15:41:23Z|00162|binding|INFO|Releasing lport 7a46b87d-2beb-4cc1-bbcd-9213aff26623 from this chassis (sb_readonly=0)
Jan 27 15:41:23 compute-0 ovn_controller[97541]: 2026-01-27T15:41:23Z|00163|binding|INFO|Setting lport 7a46b87d-2beb-4cc1-bbcd-9213aff26623 down in Southbound
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.055 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 ovn_controller[97541]: 2026-01-27T15:41:23Z|00164|binding|INFO|Removing iface tap7a46b87d-2b ovn-installed in OVS
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.062 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.072 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:86:bd 10.100.0.9'], port_security=['fa:16:3e:a6:86:bd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45d73e6a-cef2-413e-88e0-7e4bcd6dad4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f0146e24567428baacde411c6d73bda', 'neutron:revision_number': '4', 'neutron:security_group_ids': '552023c9-a293-4b75-900a-b2b7c9e08ff8 d4b922d8-9caa-4721-973c-c12f4c90f96b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2993dea3-6392-4b20-8301-1899d7e33053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=7a46b87d-2beb-4cc1-bbcd-9213aff26623) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.073 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 7a46b87d-2beb-4cc1-bbcd-9213aff26623 in datapath a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 unbound from our chassis
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.074 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.075 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[df74d77f-5d07-4ddc-a564-18033493fd09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.076 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 namespace which is not needed anymore
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.079 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 27 15:41:23 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 41.235s CPU time.
Jan 27 15:41:23 compute-0 systemd-machined[156506]: Machine qemu-14-instance-0000000d terminated.
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [NOTICE]   (252452) : haproxy version is 2.8.14-c23fe91
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [NOTICE]   (252452) : path to executable is /usr/sbin/haproxy
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [WARNING]  (252452) : Exiting Master process...
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [WARNING]  (252452) : Exiting Master process...
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [ALERT]    (252452) : Current worker (252455) exited with code 143 (Terminated)
Jan 27 15:41:23 compute-0 neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0[252448]: [WARNING]  (252452) : All workers exited. Exiting... (0)
Jan 27 15:41:23 compute-0 systemd[1]: libpod-f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92.scope: Deactivated successfully.
Jan 27 15:41:23 compute-0 podman[253087]: 2026-01-27 15:41:23.234945232 +0000 UTC m=+0.062333243 container died f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.239 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.245 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92-userdata-shm.mount: Deactivated successfully.
Jan 27 15:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e1e66d9fe220b2db104d99a10580215561922d495e022dc7f356c7f3a1648c-merged.mount: Deactivated successfully.
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.289 185195 INFO nova.virt.libvirt.driver [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Instance destroyed successfully.
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.290 185195 DEBUG nova.objects.instance [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lazy-loading 'resources' on Instance uuid 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:41:23 compute-0 podman[253087]: 2026-01-27 15:41:23.296593895 +0000 UTC m=+0.123981906 container cleanup f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 15:41:23 compute-0 systemd[1]: libpod-conmon-f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92.scope: Deactivated successfully.
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.311 185195 DEBUG nova.virt.libvirt.vif [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1744154143',display_name='tempest-TestServerBasicOps-server-1744154143',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1744154143',id=13,image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeVuIdD1e2Iw5Jkg66oTKxWb47jyBHgE+MD+LICXxzi+CMtDZ/MvSe64UyPW2JMugzBTLHCKk8WD0Ib00Bo8evnO5aNxmlmBTNmihqRAk6IX5fKUiD9YgMUM/5FL+g4KQ==',key_name='tempest-TestServerBasicOps-469378869',keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f0146e24567428baacde411c6d73bda',ramdisk_id='',reservation_id='r-vyu8k007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='fd0c4e3b-2dbb-4e18-aff3-9a79cee03c87',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-235373023',owner_user_name='tempest-TestServerBasicOps-235373023-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:41:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71aaddfe2e5a440da3af8d89984705b9',uuid=45d73e6a-cef2-413e-88e0-7e4bcd6dad4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": 
"fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.312 185195 DEBUG nova.network.os_vif_util [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converting VIF {"id": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "address": "fa:16:3e:a6:86:bd", "network": {"id": "a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1439787436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f0146e24567428baacde411c6d73bda", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a46b87d-2b", "ovs_interfaceid": "7a46b87d-2beb-4cc1-bbcd-9213aff26623", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.313 185195 DEBUG nova.network.os_vif_util [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.313 185195 DEBUG os_vif [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.315 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.315 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a46b87d-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.317 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.318 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.321 185195 INFO os_vif [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:86:bd,bridge_name='br-int',has_traffic_filtering=True,id=7a46b87d-2beb-4cc1-bbcd-9213aff26623,network=Network(a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a46b87d-2b')
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.322 185195 INFO nova.virt.libvirt.driver [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Deleting instance files /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e_del
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.323 185195 INFO nova.virt.libvirt.driver [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Deletion of /var/lib/nova/instances/45d73e6a-cef2-413e-88e0-7e4bcd6dad4e_del complete
Jan 27 15:41:23 compute-0 podman[253132]: 2026-01-27 15:41:23.371124934 +0000 UTC m=+0.049514859 container remove f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.378 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1011a075-189c-428c-af10-8f8164febcb6]: (4, ('Tue Jan 27 03:41:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 (f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92)\nf111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92\nTue Jan 27 03:41:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 (f111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92)\nf111563b6b699d24983ce57151d2947c123da42bbbca2231ed8108d3b6424c92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.380 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cceb82-4dac-49ed-b041-0d4901fa4ae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.381 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ba0879-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.383 185195 INFO nova.compute.manager [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.385 185195 DEBUG oslo.service.loopingcall [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.385 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 kernel: tapa3ba0879-a0: left promiscuous mode
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.386 185195 DEBUG nova.compute.manager [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.387 185195 DEBUG nova.network.neutron [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.396 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.399 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a27a07c2-7073-4578-b946-7b099c537f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.417 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[257215ea-be99-4ee0-baa4-ac01edc8fd06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.418 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[12c345c3-c032-48ad-9c87-a88dbdb67022]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.423 185195 DEBUG nova.compute.manager [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-unplugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.423 185195 DEBUG oslo_concurrency.lockutils [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.424 185195 DEBUG oslo_concurrency.lockutils [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.424 185195 DEBUG oslo_concurrency.lockutils [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.425 185195 DEBUG nova.compute.manager [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] No waiting events found dispatching network-vif-unplugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:41:23 compute-0 nova_compute[185191]: 2026-01-27 15:41:23.425 185195 DEBUG nova.compute.manager [req-9eab5d98-15d4-405a-920e-6dabfa82646d req-9a8ad197-52cd-48fd-abb5-87d03fdd68a8 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-unplugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.433 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c9a531-13eb-40b5-98ff-5e923305d944]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598635, 'reachable_time': 23006, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253147, 'error': None, 'target': 'ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.436 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ba0879-a22f-4d0b-9f3e-4faa1f4caff0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:41:23 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:23.436 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbbeae5-52db-4982-af9a-fc903ef27cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:23 compute-0 systemd[1]: run-netns-ovnmeta\x2da3ba0879\x2da22f\x2d4d0b\x2d9f3e\x2d4faa1f4caff0.mount: Deactivated successfully.
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.522 185195 DEBUG nova.network.neutron [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.540 185195 INFO nova.compute.manager [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Took 1.15 seconds to deallocate network for instance.
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.580 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.580 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.665 185195 DEBUG nova.compute.provider_tree [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.682 185195 DEBUG nova.scheduler.client.report [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.727 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.769 185195 INFO nova.scheduler.client.report [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Deleted allocations for instance 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e
Jan 27 15:41:24 compute-0 nova_compute[185191]: 2026-01-27 15:41:24.864 185195 DEBUG oslo_concurrency.lockutils [None req-592dd03d-2b04-402a-bbdc-194bb00c3102 71aaddfe2e5a440da3af8d89984705b9 7f0146e24567428baacde411c6d73bda - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:25.188 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.718 185195 DEBUG nova.compute.manager [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.719 185195 DEBUG oslo_concurrency.lockutils [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.719 185195 DEBUG oslo_concurrency.lockutils [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.719 185195 DEBUG oslo_concurrency.lockutils [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "45d73e6a-cef2-413e-88e0-7e4bcd6dad4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.720 185195 DEBUG nova.compute.manager [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] No waiting events found dispatching network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.720 185195 WARNING nova.compute.manager [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received unexpected event network-vif-plugged-7a46b87d-2beb-4cc1-bbcd-9213aff26623 for instance with vm_state deleted and task_state None.
Jan 27 15:41:25 compute-0 nova_compute[185191]: 2026-01-27 15:41:25.720 185195 DEBUG nova.compute.manager [req-35355a01-0e79-4760-ad5b-ace73aa3f989 req-8fb456ce-8e98-47c5-ae98-706a1255a001 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Received event network-vif-deleted-7a46b87d-2beb-4cc1-bbcd-9213aff26623 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:26 compute-0 nova_compute[185191]: 2026-01-27 15:41:26.104 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:26 compute-0 podman[253148]: 2026-01-27 15:41:26.328603824 +0000 UTC m=+0.088806703 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true)
Jan 27 15:41:28 compute-0 nova_compute[185191]: 2026-01-27 15:41:28.319 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:29 compute-0 podman[201073]: time="2026-01-27T15:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:41:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:41:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Jan 27 15:41:30 compute-0 podman[253169]: 2026-01-27 15:41:30.301466093 +0000 UTC m=+0.056788844 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:41:30 compute-0 podman[253168]: 2026-01-27 15:41:30.327378298 +0000 UTC m=+0.082671908 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, release-0.7.12=)
Jan 27 15:41:31 compute-0 nova_compute[185191]: 2026-01-27 15:41:31.107 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:31 compute-0 openstack_network_exporter[204239]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:41:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:41:31 compute-0 openstack_network_exporter[204239]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:41:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:41:31 compute-0 nova_compute[185191]: 2026-01-27 15:41:31.583 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:31 compute-0 nova_compute[185191]: 2026-01-27 15:41:31.808 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:32 compute-0 podman[253211]: 2026-01-27 15:41:32.319980924 +0000 UTC m=+0.080860150 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:41:33 compute-0 nova_compute[185191]: 2026-01-27 15:41:33.323 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.046 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.047 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.070 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.109 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.183 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.184 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.195 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.196 185195 INFO nova.compute.claims [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.323 185195 DEBUG nova.compute.provider_tree [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.343 185195 DEBUG nova.scheduler.client.report [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.370 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.371 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.429 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.430 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.453 185195 INFO nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.473 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.578 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.581 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.581 185195 INFO nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Creating image(s)
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.583 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.583 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.584 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.585 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.586 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.685 185195 DEBUG nova.policy [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20f0077bc9bd475ebff1667438d2013e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.740 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.763 185195 WARNING nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.763 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:41:36 compute-0 nova_compute[185191]: 2026-01-27 15:41:36.764 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.287 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769528483.281299, 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.288 185195 INFO nova.compute.manager [-] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] VM Stopped (Lifecycle Event)
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.312 185195 DEBUG nova.compute.manager [None req-96a384d1-1462-41dc-9d51-a2de1bacae92 - - - - - -] [instance: 45d73e6a-cef2-413e-88e0-7e4bcd6dad4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.326 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.345 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Successfully created port: 9a8c7659-ad95-4751-9633-f076227a89a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.660 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.727 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.part --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.728 185195 DEBUG nova.virt.images [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] 9d30f498-7a22-4c96-a758-84b2da277162 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.730 185195 DEBUG nova.privsep.utils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.731 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.part /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.974 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.part /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.converted" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:38 compute-0 nova_compute[185191]: 2026-01-27 15:41:38.978 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.035 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.037 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.052 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.117 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.119 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.120 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.132 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.189 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.190 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81,backing_fmt=raw /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.237 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81,backing_fmt=raw /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.239 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.239 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.299 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.300 185195 DEBUG nova.virt.disk.api [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Checking if we can resize image /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.301 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.364 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.365 185195 DEBUG nova.virt.disk.api [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Cannot resize image /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.365 185195 DEBUG nova.objects.instance [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'migration_context' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.392 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.393 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Ensure instance console log exists: /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.393 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.394 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.394 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.496 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Successfully updated port: 9a8c7659-ad95-4751-9633-f076227a89a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.532 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.533 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.534 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.720 185195 DEBUG nova.compute.manager [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-changed-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.720 185195 DEBUG nova.compute.manager [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Refreshing instance network info cache due to event network-changed-9a8c7659-ad95-4751-9633-f076227a89a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.720 185195 DEBUG oslo_concurrency.lockutils [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:41:39 compute-0 nova_compute[185191]: 2026-01-27 15:41:39.832 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:41:40 compute-0 nova_compute[185191]: 2026-01-27 15:41:40.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:40 compute-0 nova_compute[185191]: 2026-01-27 15:41:40.991 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:40 compute-0 nova_compute[185191]: 2026-01-27 15:41:40.991 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:40 compute-0 nova_compute[185191]: 2026-01-27 15:41:40.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:40 compute-0 nova_compute[185191]: 2026-01-27 15:41:40.993 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.111 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.314 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.315 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5308MB free_disk=72.34267807006836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.315 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.315 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.429 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.429 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.429 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.470 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.491 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.514 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:41:41 compute-0 nova_compute[185191]: 2026-01-27 15:41:41.515 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.004 185195 DEBUG nova.network.neutron [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.025 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.025 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Instance network_info: |[{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.026 185195 DEBUG oslo_concurrency.lockutils [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.026 185195 DEBUG nova.network.neutron [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Refreshing network info cache for port 9a8c7659-ad95-4751-9633-f076227a89a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.029 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Start _get_guest_xml network_info=[{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:41:26Z,direct_url=<?>,disk_format='qcow2',id=9d30f498-7a22-4c96-a758-84b2da277162,min_disk=0,min_ram=0,name='tempest-scenario-img--117615184',owner='20f0077bc9bd475ebff1667438d2013e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:41:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '9d30f498-7a22-4c96-a758-84b2da277162'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.036 185195 WARNING nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.042 185195 DEBUG nova.virt.libvirt.host [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.042 185195 DEBUG nova.virt.libvirt.host [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.049 185195 DEBUG nova.virt.libvirt.host [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.049 185195 DEBUG nova.virt.libvirt.host [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.050 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.050 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:41:26Z,direct_url=<?>,disk_format='qcow2',id=9d30f498-7a22-4c96-a758-84b2da277162,min_disk=0,min_ram=0,name='tempest-scenario-img--117615184',owner='20f0077bc9bd475ebff1667438d2013e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:41:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.050 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.050 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.051 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.051 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.051 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.051 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.051 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.052 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.052 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.052 185195 DEBUG nova.virt.hardware [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.055 185195 DEBUG nova.virt.libvirt.vif [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:41:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',id=14,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-ez8uojz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiTest-349502
190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:41:36Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=f8fa4ecf-1446-421b-893d-f2b34f89da54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.055 185195 DEBUG nova.network.os_vif_util [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.056 185195 DEBUG nova.network.os_vif_util [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.057 185195 DEBUG nova.objects.instance [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'pci_devices' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.076 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <uuid>f8fa4ecf-1446-421b-893d-f2b34f89da54</uuid>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <name>instance-0000000e</name>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:name>te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4</nova:name>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:41:42</nova:creationTime>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:user uuid="2f735dc3417d4dc1830a1081fe9a604b">tempest-PrometheusGabbiTest-349502190-project-member</nova:user>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:project uuid="20f0077bc9bd475ebff1667438d2013e">tempest-PrometheusGabbiTest-349502190</nova:project>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="9d30f498-7a22-4c96-a758-84b2da277162"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         <nova:port uuid="9a8c7659-ad95-4751-9633-f076227a89a5">
Jan 27 15:41:42 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.1.182" ipVersion="4"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <system>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="serial">f8fa4ecf-1446-421b-893d-f2b34f89da54</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="uuid">f8fa4ecf-1446-421b-893d-f2b34f89da54</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </system>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <os>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </os>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <features>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </features>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.config"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:9b:9a:3f"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <target dev="tap9a8c7659-ad"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/console.log" append="off"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <video>
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </video>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:41:42 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:41:42 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:41:42 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:41:42 compute-0 nova_compute[185191]: </domain>
Jan 27 15:41:42 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.077 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Preparing to wait for external event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.077 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.078 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.078 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.079 185195 DEBUG nova.virt.libvirt.vif [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:41:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',id=14,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-ez8uojz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiT
est-349502190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:41:36Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=f8fa4ecf-1446-421b-893d-f2b34f89da54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.079 185195 DEBUG nova.network.os_vif_util [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.080 185195 DEBUG nova.network.os_vif_util [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.080 185195 DEBUG os_vif [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.081 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.081 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.082 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.084 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.085 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a8c7659-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.085 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a8c7659-ad, col_values=(('external_ids', {'iface-id': '9a8c7659-ad95-4751-9633-f076227a89a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:9a:3f', 'vm-uuid': 'f8fa4ecf-1446-421b-893d-f2b34f89da54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.087 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:42 compute-0 NetworkManager[56090]: <info>  [1769528502.0886] manager: (tap9a8c7659-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.090 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.093 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.095 185195 INFO os_vif [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad')
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.151 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.152 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.152 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No VIF found with MAC fa:16:3e:9b:9a:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:41:42 compute-0 nova_compute[185191]: 2026-01-27 15:41:42.153 185195 INFO nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Using config drive
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.322 185195 INFO nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Creating config drive at /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.config
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.329 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17nca564 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.456 185195 DEBUG oslo_concurrency.processutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17nca564" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:41:43 compute-0 kernel: tap9a8c7659-ad: entered promiscuous mode
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.5177] manager: (tap9a8c7659-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Jan 27 15:41:43 compute-0 ovn_controller[97541]: 2026-01-27T15:41:43Z|00165|binding|INFO|Claiming lport 9a8c7659-ad95-4751-9633-f076227a89a5 for this chassis.
Jan 27 15:41:43 compute-0 ovn_controller[97541]: 2026-01-27T15:41:43Z|00166|binding|INFO|9a8c7659-ad95-4751-9633-f076227a89a5: Claiming fa:16:3e:9b:9a:3f 10.100.1.182
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.519 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.524 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.535 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:9a:3f 10.100.1.182'], port_security=['fa:16:3e:9b:9a:3f 10.100.1.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.182/16', 'neutron:device_id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-583566c3-a7da-49ba-8c93-87be3496cb80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20f0077bc9bd475ebff1667438d2013e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c775d39-0088-4183-837a-f310fb1cc533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e677173-f8a0-4b87-8946-43d053c4a459, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=9a8c7659-ad95-4751-9633-f076227a89a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.536 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 9a8c7659-ad95-4751-9633-f076227a89a5 in datapath 583566c3-a7da-49ba-8c93-87be3496cb80 bound to our chassis
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.537 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 583566c3-a7da-49ba-8c93-87be3496cb80
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.547 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a45544-9d1b-47eb-a5c1-4f87d8c5eee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.548 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap583566c3-a1 in ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 15:41:43 compute-0 systemd-udevd[253281]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.550 238613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap583566c3-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.550 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6fba0843-5728-4864-b6c8-0835c49c919d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.551 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a15198ca-dd78-4977-a0ac-136136ac2f54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_controller[97541]: 2026-01-27T15:41:43Z|00167|binding|INFO|Setting lport 9a8c7659-ad95-4751-9633-f076227a89a5 ovn-installed in OVS
Jan 27 15:41:43 compute-0 ovn_controller[97541]: 2026-01-27T15:41:43Z|00168|binding|INFO|Setting lport 9a8c7659-ad95-4751-9633-f076227a89a5 up in Southbound
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.5665] device (tap9a8c7659-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.563 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1f3be8-e853-4238-9369-f823b2feb15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.564 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.5711] device (tap9a8c7659-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:41:43 compute-0 systemd-machined[156506]: New machine qemu-15-instance-0000000e.
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.580 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[633d2e02-06ff-456a-9cfd-80d1f5ae14d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.612 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[166d1676-13e4-487a-bfd0-ff829f780bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 systemd-udevd[253287]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.617 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[2d922529-7060-49a5-83e4-9e41b65c5a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.6199] manager: (tap583566c3-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.650 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[20dbdca7-977d-4cf8-b832-b9c2d4119163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.654 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[3a18adb2-18f3-4884-80cc-db9317857035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.6781] device (tap583566c3-a0): carrier: link connected
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.683 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2681ab-824e-45a2-9afd-ad09d44aa22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.699 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8ad6c4-db02-4392-bb67-93ecd275d731]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap583566c3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:b6:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607940, 'reachable_time': 27338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253316, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.714 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[31d53048-69a1-43e8-86e8-af4f01b72537]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:b632'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607940, 'tstamp': 607940}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253317, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.730 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[a814d7b4-ba14-4794-9d9e-146c9260fc0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap583566c3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:b6:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607940, 'reachable_time': 27338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253318, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.762 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0d1775-3142-40cd-9187-ac42b0f6e482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.815 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[c3572931-f7ca-44e9-bbe5-ff2d087fa692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.817 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap583566c3-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.818 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.818 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap583566c3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.820 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:43 compute-0 kernel: tap583566c3-a0: entered promiscuous mode
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.823 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap583566c3-a0, col_values=(('external_ids', {'iface-id': '1a1e49d2-439b-4887-8a67-bfa43f528ce6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:41:43 compute-0 NetworkManager[56090]: <info>  [1769528503.8243] manager: (tap583566c3-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.827 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:43 compute-0 ovn_controller[97541]: 2026-01-27T15:41:43Z|00169|binding|INFO|Releasing lport 1a1e49d2-439b-4887-8a67-bfa43f528ce6 from this chassis (sb_readonly=0)
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.827 106793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/583566c3-a7da-49ba-8c93-87be3496cb80.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/583566c3-a7da-49ba-8c93-87be3496cb80.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.838 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[9517028e-01b9-459b-b9e4-ee20b58d3e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.839 106793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: global
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     log         /dev/log local0 debug
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     log-tag     haproxy-metadata-proxy-583566c3-a7da-49ba-8c93-87be3496cb80
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     user        root
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     group       root
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     maxconn     1024
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     pidfile     /var/lib/neutron/external/pids/583566c3-a7da-49ba-8c93-87be3496cb80.pid.haproxy
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     daemon
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: defaults
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     log global
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     mode http
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     option httplog
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     option dontlognull
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     option http-server-close
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     option forwardfor
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     retries                 3
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     timeout http-request    30s
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     timeout connect         30s
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     timeout client          32s
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     timeout server          32s
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     timeout http-keep-alive 30s
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: listen listener
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     bind 169.254.169.254:80
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:     http-request add-header X-OVN-Network-ID 583566c3-a7da-49ba-8c93-87be3496cb80
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 15:41:43 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:41:43.841 106793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'env', 'PROCESS_TAG=haproxy-583566c3-a7da-49ba-8c93-87be3496cb80', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/583566c3-a7da-49ba-8c93-87be3496cb80.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 15:41:43 compute-0 nova_compute[185191]: 2026-01-27 15:41:43.840 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.045 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528504.0454366, f8fa4ecf-1446-421b-893d-f2b34f89da54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.047 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] VM Started (Lifecycle Event)
Jan 27 15:41:44 compute-0 podman[253355]: 2026-01-27 15:41:44.245837348 +0000 UTC m=+0.061381677 container create bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 15:41:44 compute-0 systemd[1]: Started libpod-conmon-bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964.scope.
Jan 27 15:41:44 compute-0 podman[253355]: 2026-01-27 15:41:44.21344335 +0000 UTC m=+0.028987699 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 15:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 27 15:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311ae5e2c4d666edcd5b8091e064f610f650003f0cb61f01650c01a5d1365fe7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 15:41:44 compute-0 podman[253355]: 2026-01-27 15:41:44.346037215 +0000 UTC m=+0.161581564 container init bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 15:41:44 compute-0 podman[253355]: 2026-01-27 15:41:44.352771216 +0000 UTC m=+0.168315545 container start bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 15:41:44 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [NOTICE]   (253373) : New worker (253375) forked
Jan 27 15:41:44 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [NOTICE]   (253373) : Loading success.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.445 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.454 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528504.0455806, f8fa4ecf-1446-421b-893d-f2b34f89da54 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.455 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] VM Paused (Lifecycle Event)
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.500 185195 DEBUG nova.compute.manager [req-f07c0b1e-3856-4cbe-89e6-a48ad621c4b6 req-3481a4b6-0472-496c-831e-03315ed0b05b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.501 185195 DEBUG oslo_concurrency.lockutils [req-f07c0b1e-3856-4cbe-89e6-a48ad621c4b6 req-3481a4b6-0472-496c-831e-03315ed0b05b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.502 185195 DEBUG oslo_concurrency.lockutils [req-f07c0b1e-3856-4cbe-89e6-a48ad621c4b6 req-3481a4b6-0472-496c-831e-03315ed0b05b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.502 185195 DEBUG oslo_concurrency.lockutils [req-f07c0b1e-3856-4cbe-89e6-a48ad621c4b6 req-3481a4b6-0472-496c-831e-03315ed0b05b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.503 185195 DEBUG nova.compute.manager [req-f07c0b1e-3856-4cbe-89e6-a48ad621c4b6 req-3481a4b6-0472-496c-831e-03315ed0b05b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Processing event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.504 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.511 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.516 185195 INFO nova.virt.libvirt.driver [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Instance spawned successfully.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.517 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.569 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.577 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528504.5078506, f8fa4ecf-1446-421b-893d-f2b34f89da54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.578 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] VM Resumed (Lifecycle Event)
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.640 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.643 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.644 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.644 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.645 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.646 185195 DEBUG nova.virt.libvirt.driver [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.664 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.676 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.734 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.776 185195 INFO nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Took 8.20 seconds to spawn the instance on the hypervisor.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.776 185195 DEBUG nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.854 185195 INFO nova.compute.manager [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Took 8.72 seconds to build instance.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.911 185195 DEBUG oslo_concurrency.lockutils [None req-b263cee1-21e5-4682-9bf2-a645e333c348 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.912 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.913 185195 INFO nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:41:44 compute-0 nova_compute[185191]: 2026-01-27 15:41:44.913 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:45 compute-0 podman[253384]: 2026-01-27 15:41:45.314385904 +0000 UTC m=+0.067504092 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:41:45 compute-0 nova_compute[185191]: 2026-01-27 15:41:45.462 185195 DEBUG nova.network.neutron [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated VIF entry in instance network info cache for port 9a8c7659-ad95-4751-9633-f076227a89a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:41:45 compute-0 nova_compute[185191]: 2026-01-27 15:41:45.463 185195 DEBUG nova.network.neutron [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:41:45 compute-0 nova_compute[185191]: 2026-01-27 15:41:45.484 185195 DEBUG oslo_concurrency.lockutils [req-2e40ae0f-714f-4009-a7b2-32769df4e725 req-4e05a56e-1663-448a-a55b-4f61ea771502 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.113 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.619 185195 DEBUG nova.compute.manager [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.621 185195 DEBUG oslo_concurrency.lockutils [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.622 185195 DEBUG oslo_concurrency.lockutils [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.623 185195 DEBUG oslo_concurrency.lockutils [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.624 185195 DEBUG nova.compute.manager [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] No waiting events found dispatching network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:41:46 compute-0 nova_compute[185191]: 2026-01-27 15:41:46.625 185195 WARNING nova.compute.manager [req-412fcd74-99f4-45f3-831e-ad0a2c5bde3d req-68d22697-dee5-4a7d-91a3-69379608d32b 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received unexpected event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 for instance with vm_state active and task_state None.
Jan 27 15:41:47 compute-0 nova_compute[185191]: 2026-01-27 15:41:47.089 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:47 compute-0 podman[253400]: 2026-01-27 15:41:47.316682079 +0000 UTC m=+0.074997982 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true)
Jan 27 15:41:47 compute-0 podman[253402]: 2026-01-27 15:41:47.34840893 +0000 UTC m=+0.093962201 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 27 15:41:47 compute-0 podman[253401]: 2026-01-27 15:41:47.379317459 +0000 UTC m=+0.132814893 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ovn_controller)
Jan 27 15:41:50 compute-0 nova_compute[185191]: 2026-01-27 15:41:50.514 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:50 compute-0 nova_compute[185191]: 2026-01-27 15:41:50.515 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:50 compute-0 nova_compute[185191]: 2026-01-27 15:41:50.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:50 compute-0 nova_compute[185191]: 2026-01-27 15:41:50.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:51 compute-0 nova_compute[185191]: 2026-01-27 15:41:51.115 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:52 compute-0 nova_compute[185191]: 2026-01-27 15:41:52.093 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:55 compute-0 nova_compute[185191]: 2026-01-27 15:41:55.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:55 compute-0 nova_compute[185191]: 2026-01-27 15:41:55.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:41:55 compute-0 nova_compute[185191]: 2026-01-27 15:41:55.947 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:41:56 compute-0 nova_compute[185191]: 2026-01-27 15:41:56.117 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:56 compute-0 nova_compute[185191]: 2026-01-27 15:41:56.392 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:41:56 compute-0 nova_compute[185191]: 2026-01-27 15:41:56.393 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:41:56 compute-0 nova_compute[185191]: 2026-01-27 15:41:56.393 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:41:56 compute-0 nova_compute[185191]: 2026-01-27 15:41:56.394 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:41:57 compute-0 nova_compute[185191]: 2026-01-27 15:41:57.095 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:41:57 compute-0 podman[253467]: 2026-01-27 15:41:57.346767074 +0000 UTC m=+0.094293399 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.314 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.579 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.579 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.580 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.581 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:41:58 compute-0 nova_compute[185191]: 2026-01-27 15:41:58.581 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:41:59 compute-0 podman[201073]: time="2026-01-27T15:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:41:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:41:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:42:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:42:00.263 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:42:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:42:00.264 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:42:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:42:00.264 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:42:01 compute-0 nova_compute[185191]: 2026-01-27 15:42:01.120 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:01 compute-0 podman[253487]: 2026-01-27 15:42:01.302810663 +0000 UTC m=+0.055738355 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:42:01 compute-0 podman[253486]: 2026-01-27 15:42:01.303408879 +0000 UTC m=+0.060476752 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, config_id=kepler, release-0.7.12=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
architecture=x86_64, maintainer=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, name=ubi9)
Jan 27 15:42:01 compute-0 openstack_network_exporter[204239]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:42:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:42:01 compute-0 openstack_network_exporter[204239]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:42:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:42:02 compute-0 nova_compute[185191]: 2026-01-27 15:42:02.098 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:03 compute-0 podman[253529]: 2026-01-27 15:42:03.344351251 +0000 UTC m=+0.095444121 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:42:05 compute-0 nova_compute[185191]: 2026-01-27 15:42:05.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:06 compute-0 nova_compute[185191]: 2026-01-27 15:42:06.122 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:07 compute-0 nova_compute[185191]: 2026-01-27 15:42:07.101 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:11 compute-0 nova_compute[185191]: 2026-01-27 15:42:11.123 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:12 compute-0 nova_compute[185191]: 2026-01-27 15:42:12.106 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:13 compute-0 ovn_controller[97541]: 2026-01-27T15:42:13Z|00170|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 27 15:42:16 compute-0 nova_compute[185191]: 2026-01-27 15:42:16.127 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:16 compute-0 podman[253552]: 2026-01-27 15:42:16.340540187 +0000 UTC m=+0.095255075 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 15:42:17 compute-0 nova_compute[185191]: 2026-01-27 15:42:17.109 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:18 compute-0 podman[253573]: 2026-01-27 15:42:18.315823349 +0000 UTC m=+0.075287550 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:42:18 compute-0 podman[253575]: 2026-01-27 15:42:18.323146045 +0000 UTC m=+0.075578658 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:42:18 compute-0 podman[253574]: 2026-01-27 15:42:18.357931078 +0000 UTC m=+0.113744971 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 27 15:42:19 compute-0 ovn_controller[97541]: 2026-01-27T15:42:19Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:9a:3f 10.100.1.182
Jan 27 15:42:19 compute-0 ovn_controller[97541]: 2026-01-27T15:42:19Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:9a:3f 10.100.1.182
Jan 27 15:42:21 compute-0 nova_compute[185191]: 2026-01-27 15:42:21.129 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:22 compute-0 nova_compute[185191]: 2026-01-27 15:42:22.111 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:26 compute-0 nova_compute[185191]: 2026-01-27 15:42:26.131 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:27 compute-0 nova_compute[185191]: 2026-01-27 15:42:27.114 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:28 compute-0 podman[253651]: 2026-01-27 15:42:28.319875586 +0000 UTC m=+0.076786100 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 15:42:29 compute-0 podman[201073]: time="2026-01-27T15:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:42:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:42:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:42:31 compute-0 nova_compute[185191]: 2026-01-27 15:42:31.133 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:31 compute-0 openstack_network_exporter[204239]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:42:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:42:31 compute-0 openstack_network_exporter[204239]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:42:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:42:31 compute-0 podman[253671]: 2026-01-27 15:42:31.801583854 +0000 UTC m=+0.056629499 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:42:31 compute-0 podman[253670]: 2026-01-27 15:42:31.816108824 +0000 UTC m=+0.075006873 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:42:32 compute-0 nova_compute[185191]: 2026-01-27 15:42:32.117 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:34 compute-0 podman[253710]: 2026-01-27 15:42:34.325894168 +0000 UTC m=+0.081408724 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:42:36 compute-0 nova_compute[185191]: 2026-01-27 15:42:36.136 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:37 compute-0 nova_compute[185191]: 2026-01-27 15:42:37.121 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:41 compute-0 nova_compute[185191]: 2026-01-27 15:42:41.138 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.125 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.982 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:42:42 compute-0 nova_compute[185191]: 2026-01-27 15:42:42.984 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.078 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.138 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.139 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.198 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.497 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.498 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=72.3138313293457GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.498 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.499 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.579 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.580 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.580 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.650 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.672 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.695 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:42:43 compute-0 nova_compute[185191]: 2026-01-27 15:42:43.696 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:42:46 compute-0 nova_compute[185191]: 2026-01-27 15:42:46.142 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:47 compute-0 nova_compute[185191]: 2026-01-27 15:42:47.128 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:47 compute-0 podman[253740]: 2026-01-27 15:42:47.323913445 +0000 UTC m=+0.078554518 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 27 15:42:49 compute-0 podman[253756]: 2026-01-27 15:42:49.316265773 +0000 UTC m=+0.071584111 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:42:49 compute-0 podman[253758]: 2026-01-27 15:42:49.323971149 +0000 UTC m=+0.069656128 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7)
Jan 27 15:42:49 compute-0 podman[253757]: 2026-01-27 15:42:49.360003856 +0000 UTC m=+0.110090094 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 27 15:42:50 compute-0 nova_compute[185191]: 2026-01-27 15:42:50.696 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:50 compute-0 nova_compute[185191]: 2026-01-27 15:42:50.696 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:50 compute-0 nova_compute[185191]: 2026-01-27 15:42:50.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:51 compute-0 nova_compute[185191]: 2026-01-27 15:42:51.142 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:52 compute-0 nova_compute[185191]: 2026-01-27 15:42:52.132 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:52 compute-0 nova_compute[185191]: 2026-01-27 15:42:52.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:52 compute-0 nova_compute[185191]: 2026-01-27 15:42:52.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:55 compute-0 nova_compute[185191]: 2026-01-27 15:42:55.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:55 compute-0 nova_compute[185191]: 2026-01-27 15:42:55.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:42:55 compute-0 nova_compute[185191]: 2026-01-27 15:42:55.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:42:56 compute-0 nova_compute[185191]: 2026-01-27 15:42:56.144 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:56 compute-0 nova_compute[185191]: 2026-01-27 15:42:56.280 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:42:56 compute-0 nova_compute[185191]: 2026-01-27 15:42:56.280 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:42:56 compute-0 nova_compute[185191]: 2026-01-27 15:42:56.280 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:42:56 compute-0 nova_compute[185191]: 2026-01-27 15:42:56.281 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:42:57 compute-0 nova_compute[185191]: 2026-01-27 15:42:57.135 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.303 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.331 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.331 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.332 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.332 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:42:58 compute-0 nova_compute[185191]: 2026-01-27 15:42:58.332 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:42:59 compute-0 podman[253821]: 2026-01-27 15:42:59.299574014 +0000 UTC m=+0.061063819 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 27 15:42:59 compute-0 podman[201073]: time="2026-01-27T15:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:42:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:42:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 27 15:43:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:43:00.265 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:43:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:43:00.265 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:43:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:43:00.266 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:43:01 compute-0 nova_compute[185191]: 2026-01-27 15:43:01.148 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:01 compute-0 openstack_network_exporter[204239]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:43:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:43:01 compute-0 openstack_network_exporter[204239]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:43:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:43:02 compute-0 nova_compute[185191]: 2026-01-27 15:43:02.141 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:02 compute-0 podman[253840]: 2026-01-27 15:43:02.315278116 +0000 UTC m=+0.065708293 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:43:02 compute-0 podman[253839]: 2026-01-27 15:43:02.335804677 +0000 UTC m=+0.092511312 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Jan 27 15:43:05 compute-0 podman[253881]: 2026-01-27 15:43:05.304163449 +0000 UTC m=+0.056610859 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:43:06 compute-0 nova_compute[185191]: 2026-01-27 15:43:06.154 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:06 compute-0 nova_compute[185191]: 2026-01-27 15:43:06.947 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:07 compute-0 nova_compute[185191]: 2026-01-27 15:43:07.146 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.993 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.994 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.994 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.998 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f8fa4ecf-1446-421b-893d-f2b34f89da54 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:43:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:10.999 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f8fa4ecf-1446-421b-893d-f2b34f89da54 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:43:11 compute-0 nova_compute[185191]: 2026-01-27 15:43:11.158 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.467 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 27 Jan 2026 15:43:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bd59455b-e69f-43bb-a566-0234c8d9770f x-openstack-request-id: req-bd59455b-e69f-43bb-a566-0234c8d9770f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.467 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f8fa4ecf-1446-421b-893d-f2b34f89da54", "name": "te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4", "status": "ACTIVE", "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "user_id": "2f735dc3417d4dc1830a1081fe9a604b", "metadata": {"metering.server_group": "b3308bb6-f54d-4153-86c0-fa8fa74a39af"}, "hostId": "a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc", "image": {"id": "9d30f498-7a22-4c96-a758-84b2da277162", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/9d30f498-7a22-4c96-a758-84b2da277162"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:41:34Z", "updated": "2026-01-27T15:41:44Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.182", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9b:9a:3f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f8fa4ecf-1446-421b-893d-f2b34f89da54"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f8fa4ecf-1446-421b-893d-f2b34f89da54"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:41:44.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.467 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f8fa4ecf-1446-421b-893d-f2b34f89da54 used request id req-bd59455b-e69f-43bb-a566-0234c8d9770f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.469 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:43:11.469870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.514 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3559224028 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.515 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.516 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:43:11.516531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:43:11.517936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.531 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.531 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:43:11.532789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.536 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f8fa4ecf-1446-421b-893d-f2b34f89da54 / tap9a8c7659-ad inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.536 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.536 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:43:11.537231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:43:11.538333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:43:11.539385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.539 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:43:11.540525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.563 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 85150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.563 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:43:11.564123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 43.3515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:43:11.565344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.567 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:43:11.566544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:43:11.567649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4>]
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:43:11.568637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.570 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:43:11.569556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:43:11.570774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:43:11.571925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:43:11.573120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:43:11.574360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 28929024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.575 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:43:11.575495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:43:11.576920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1006510145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:43:11.578129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 65762611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4>]
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:43:11.579313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.580 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:43:11.580442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:43:11.581924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:43:11.583123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:43:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:43:11.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:43:11.584194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:43:12 compute-0 nova_compute[185191]: 2026-01-27 15:43:12.148 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:16 compute-0 nova_compute[185191]: 2026-01-27 15:43:16.158 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:17 compute-0 nova_compute[185191]: 2026-01-27 15:43:17.153 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:18 compute-0 podman[253904]: 2026-01-27 15:43:18.322969832 +0000 UTC m=+0.078352582 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 15:43:19 compute-0 sshd-session[253923]: Invalid user sol from 2.57.122.238 port 52860
Jan 27 15:43:19 compute-0 sshd-session[253923]: Connection closed by invalid user sol 2.57.122.238 port 52860 [preauth]
Jan 27 15:43:20 compute-0 podman[253926]: 2026-01-27 15:43:20.328315189 +0000 UTC m=+0.074138399 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:43:20 compute-0 podman[253928]: 2026-01-27 15:43:20.358698854 +0000 UTC m=+0.098507003 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Jan 27 15:43:20 compute-0 podman[253927]: 2026-01-27 15:43:20.381412963 +0000 UTC m=+0.123363169 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:43:21 compute-0 nova_compute[185191]: 2026-01-27 15:43:21.161 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:22 compute-0 nova_compute[185191]: 2026-01-27 15:43:22.155 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:26 compute-0 nova_compute[185191]: 2026-01-27 15:43:26.164 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:27 compute-0 nova_compute[185191]: 2026-01-27 15:43:27.159 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:28 compute-0 sshd-session[253993]: Invalid user solana from 45.148.10.240 port 33410
Jan 27 15:43:28 compute-0 sshd-session[253993]: Connection closed by invalid user solana 45.148.10.240 port 33410 [preauth]
Jan 27 15:43:29 compute-0 podman[201073]: time="2026-01-27T15:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:43:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:43:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 27 15:43:30 compute-0 podman[253995]: 2026-01-27 15:43:30.32317268 +0000 UTC m=+0.077710565 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 27 15:43:31 compute-0 nova_compute[185191]: 2026-01-27 15:43:31.166 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:31 compute-0 openstack_network_exporter[204239]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:43:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:43:31 compute-0 openstack_network_exporter[204239]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:43:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:43:32 compute-0 nova_compute[185191]: 2026-01-27 15:43:32.163 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:33 compute-0 podman[254016]: 2026-01-27 15:43:33.311171938 +0000 UTC m=+0.062897547 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:43:33 compute-0 podman[254015]: 2026-01-27 15:43:33.329727656 +0000 UTC m=+0.083996334 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=kepler, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=)
Jan 27 15:43:36 compute-0 nova_compute[185191]: 2026-01-27 15:43:36.167 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:36 compute-0 podman[254055]: 2026-01-27 15:43:36.317342534 +0000 UTC m=+0.079174494 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:43:37 compute-0 nova_compute[185191]: 2026-01-27 15:43:37.167 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:41 compute-0 nova_compute[185191]: 2026-01-27 15:43:41.169 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:42 compute-0 nova_compute[185191]: 2026-01-27 15:43:42.169 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:43 compute-0 nova_compute[185191]: 2026-01-27 15:43:43.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.030 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.030 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.031 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.031 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.352 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.417 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.418 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.480 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.787 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.788 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=72.3138313293457GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.789 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.789 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.949 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.950 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:43:44 compute-0 nova_compute[185191]: 2026-01-27 15:43:44.950 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:43:45 compute-0 nova_compute[185191]: 2026-01-27 15:43:45.000 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:43:45 compute-0 nova_compute[185191]: 2026-01-27 15:43:45.049 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:43:45 compute-0 nova_compute[185191]: 2026-01-27 15:43:45.051 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:43:45 compute-0 nova_compute[185191]: 2026-01-27 15:43:45.051 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:43:46 compute-0 nova_compute[185191]: 2026-01-27 15:43:46.173 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:43:47 compute-0 nova_compute[185191]: 2026-01-27 15:43:47.172 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:49 compute-0 podman[254086]: 2026-01-27 15:43:49.302169156 +0000 UTC m=+0.055357666 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 15:43:51 compute-0 nova_compute[185191]: 2026-01-27 15:43:51.175 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:51 compute-0 podman[254106]: 2026-01-27 15:43:51.325530147 +0000 UTC m=+0.077787117 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:43:51 compute-0 podman[254108]: 2026-01-27 15:43:51.363345971 +0000 UTC m=+0.102930062 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Jan 27 15:43:51 compute-0 podman[254107]: 2026-01-27 15:43:51.378265401 +0000 UTC m=+0.121956662 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 15:43:52 compute-0 nova_compute[185191]: 2026-01-27 15:43:52.176 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:53 compute-0 nova_compute[185191]: 2026-01-27 15:43:53.053 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:53 compute-0 nova_compute[185191]: 2026-01-27 15:43:53.053 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:53 compute-0 nova_compute[185191]: 2026-01-27 15:43:53.054 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:53 compute-0 nova_compute[185191]: 2026-01-27 15:43:53.054 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:55 compute-0 nova_compute[185191]: 2026-01-27 15:43:55.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:55 compute-0 nova_compute[185191]: 2026-01-27 15:43:55.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:43:56 compute-0 nova_compute[185191]: 2026-01-27 15:43:56.177 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:56 compute-0 nova_compute[185191]: 2026-01-27 15:43:56.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:43:56 compute-0 nova_compute[185191]: 2026-01-27 15:43:56.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:43:56 compute-0 nova_compute[185191]: 2026-01-27 15:43:56.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:43:57 compute-0 nova_compute[185191]: 2026-01-27 15:43:57.179 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:43:57 compute-0 nova_compute[185191]: 2026-01-27 15:43:57.336 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:43:57 compute-0 nova_compute[185191]: 2026-01-27 15:43:57.337 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:43:57 compute-0 nova_compute[185191]: 2026-01-27 15:43:57.337 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:43:57 compute-0 nova_compute[185191]: 2026-01-27 15:43:57.338 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:43:59 compute-0 podman[201073]: time="2026-01-27T15:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:43:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:43:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 27 15:44:00 compute-0 nova_compute[185191]: 2026-01-27 15:44:00.244 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:44:00 compute-0 nova_compute[185191]: 2026-01-27 15:44:00.260 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:44:00 compute-0 nova_compute[185191]: 2026-01-27 15:44:00.260 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:44:00 compute-0 nova_compute[185191]: 2026-01-27 15:44:00.261 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:44:00.266 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:44:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:44:00.267 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:44:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:44:00.267 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:44:01 compute-0 nova_compute[185191]: 2026-01-27 15:44:01.180 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:01 compute-0 podman[254169]: 2026-01-27 15:44:01.319868343 +0000 UTC m=+0.079744099 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 15:44:01 compute-0 openstack_network_exporter[204239]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:44:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:44:01 compute-0 openstack_network_exporter[204239]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:44:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:44:02 compute-0 nova_compute[185191]: 2026-01-27 15:44:02.182 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:04 compute-0 podman[254188]: 2026-01-27 15:44:04.310855682 +0000 UTC m=+0.068201030 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler)
Jan 27 15:44:04 compute-0 podman[254189]: 2026-01-27 15:44:04.337745883 +0000 UTC m=+0.091632468 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:44:06 compute-0 nova_compute[185191]: 2026-01-27 15:44:06.183 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:07 compute-0 nova_compute[185191]: 2026-01-27 15:44:07.186 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:07 compute-0 podman[254231]: 2026-01-27 15:44:07.337171708 +0000 UTC m=+0.098769840 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:44:07 compute-0 nova_compute[185191]: 2026-01-27 15:44:07.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:11 compute-0 nova_compute[185191]: 2026-01-27 15:44:11.185 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:12 compute-0 nova_compute[185191]: 2026-01-27 15:44:12.191 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:16 compute-0 nova_compute[185191]: 2026-01-27 15:44:16.188 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:17 compute-0 nova_compute[185191]: 2026-01-27 15:44:17.194 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:20 compute-0 podman[254255]: 2026-01-27 15:44:20.30294448 +0000 UTC m=+0.063788782 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:44:21 compute-0 nova_compute[185191]: 2026-01-27 15:44:21.191 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:22 compute-0 nova_compute[185191]: 2026-01-27 15:44:22.198 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:22 compute-0 podman[254276]: 2026-01-27 15:44:22.323905165 +0000 UTC m=+0.070126221 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 27 15:44:22 compute-0 podman[254274]: 2026-01-27 15:44:22.324252785 +0000 UTC m=+0.077683905 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 27 15:44:22 compute-0 podman[254275]: 2026-01-27 15:44:22.364154835 +0000 UTC m=+0.115005295 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:44:26 compute-0 nova_compute[185191]: 2026-01-27 15:44:26.195 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:27 compute-0 nova_compute[185191]: 2026-01-27 15:44:27.202 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:29 compute-0 podman[201073]: time="2026-01-27T15:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:44:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:44:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 27 15:44:31 compute-0 nova_compute[185191]: 2026-01-27 15:44:31.196 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:31 compute-0 openstack_network_exporter[204239]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:44:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:44:31 compute-0 openstack_network_exporter[204239]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:44:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:44:32 compute-0 nova_compute[185191]: 2026-01-27 15:44:32.205 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:32 compute-0 podman[254338]: 2026-01-27 15:44:32.31744542 +0000 UTC m=+0.071034185 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 15:44:35 compute-0 podman[254359]: 2026-01-27 15:44:35.304603347 +0000 UTC m=+0.056562827 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:44:35 compute-0 podman[254358]: 2026-01-27 15:44:35.321634944 +0000 UTC m=+0.075796583 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, distribution-scope=public)
Jan 27 15:44:36 compute-0 nova_compute[185191]: 2026-01-27 15:44:36.201 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:37 compute-0 nova_compute[185191]: 2026-01-27 15:44:37.209 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:38 compute-0 podman[254400]: 2026-01-27 15:44:38.306651773 +0000 UTC m=+0.064179752 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:44:41 compute-0 nova_compute[185191]: 2026-01-27 15:44:41.204 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:42 compute-0 nova_compute[185191]: 2026-01-27 15:44:42.213 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:43 compute-0 nova_compute[185191]: 2026-01-27 15:44:43.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.109 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.110 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.110 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.111 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.187 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.251 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.253 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.310 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.652 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.657 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=72.3138313293457GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.659 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.660 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.754 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.755 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.755 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.801 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.817 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.819 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:44:44 compute-0 nova_compute[185191]: 2026-01-27 15:44:44.819 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:44:46 compute-0 nova_compute[185191]: 2026-01-27 15:44:46.207 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:47 compute-0 nova_compute[185191]: 2026-01-27 15:44:47.217 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:51 compute-0 nova_compute[185191]: 2026-01-27 15:44:51.209 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:51 compute-0 podman[254431]: 2026-01-27 15:44:51.301197095 +0000 UTC m=+0.065191119 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:44:52 compute-0 nova_compute[185191]: 2026-01-27 15:44:52.220 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:52 compute-0 nova_compute[185191]: 2026-01-27 15:44:52.814 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:52 compute-0 nova_compute[185191]: 2026-01-27 15:44:52.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:53 compute-0 podman[254449]: 2026-01-27 15:44:53.32171206 +0000 UTC m=+0.081657911 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:44:53 compute-0 podman[254451]: 2026-01-27 15:44:53.32207648 +0000 UTC m=+0.072843685 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:44:53 compute-0 podman[254450]: 2026-01-27 15:44:53.392769425 +0000 UTC m=+0.147065684 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:44:53 compute-0 nova_compute[185191]: 2026-01-27 15:44:53.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:53 compute-0 nova_compute[185191]: 2026-01-27 15:44:53.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:53 compute-0 nova_compute[185191]: 2026-01-27 15:44:53.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:56 compute-0 nova_compute[185191]: 2026-01-27 15:44:56.211 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:57 compute-0 nova_compute[185191]: 2026-01-27 15:44:57.223 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:44:57 compute-0 nova_compute[185191]: 2026-01-27 15:44:57.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:44:57 compute-0 nova_compute[185191]: 2026-01-27 15:44:57.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:44:57 compute-0 nova_compute[185191]: 2026-01-27 15:44:57.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:44:58 compute-0 nova_compute[185191]: 2026-01-27 15:44:58.274 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:44:58 compute-0 nova_compute[185191]: 2026-01-27 15:44:58.275 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:44:58 compute-0 nova_compute[185191]: 2026-01-27 15:44:58.275 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:44:58 compute-0 nova_compute[185191]: 2026-01-27 15:44:58.275 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:44:59 compute-0 podman[201073]: time="2026-01-27T15:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:44:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:44:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:45:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:45:00.268 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:45:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:45:00.268 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:45:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:45:00.269 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.133 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.167 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.168 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.169 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.169 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.169 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:45:01 compute-0 nova_compute[185191]: 2026-01-27 15:45:01.213 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:01 compute-0 openstack_network_exporter[204239]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:45:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:45:01 compute-0 openstack_network_exporter[204239]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:45:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:45:02 compute-0 nova_compute[185191]: 2026-01-27 15:45:02.226 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:03 compute-0 podman[254513]: 2026-01-27 15:45:03.341582991 +0000 UTC m=+0.094046013 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 15:45:06 compute-0 nova_compute[185191]: 2026-01-27 15:45:06.222 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:06 compute-0 podman[254532]: 2026-01-27 15:45:06.315965414 +0000 UTC m=+0.071487418 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.expose-services=, container_name=kepler, version=9.4, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0)
Jan 27 15:45:06 compute-0 podman[254533]: 2026-01-27 15:45:06.318818431 +0000 UTC m=+0.066393691 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 15:45:07 compute-0 nova_compute[185191]: 2026-01-27 15:45:07.230 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:09 compute-0 podman[254572]: 2026-01-27 15:45:09.291592861 +0000 UTC m=+0.052749915 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:45:09 compute-0 nova_compute[185191]: 2026-01-27 15:45:09.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.994 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.995 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.001 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:45:11.003476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.034 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3583670186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.035 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.037 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:45:11.036320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:45:11.038108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.050 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.050 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.051 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:45:11.051762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.056 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.058 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:45:11.057542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:45:11.058858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:45:11.060188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:45:11.061543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.078 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 204330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.080 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:45:11.079839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.081 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 43.58203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:45:11.081225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:45:11.082449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.085 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:45:11.084884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.087 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.088 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.088 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.089 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.090 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.090 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.091 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.093 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 28929024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.093 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.094 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.094 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.095 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1006510145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.096 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 65762611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.097 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.098 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:45:11.086837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.100 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.101 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:45:11.087974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.102 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.103 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:45:11.089160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:45:11.090201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:45:11.091637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:45:11.092930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:45:11.094535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:45:11.095855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:45:11.097736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:45:11.099346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:45:11.101192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:45:11.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:45:11.102351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:45:11 compute-0 nova_compute[185191]: 2026-01-27 15:45:11.224 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:12 compute-0 nova_compute[185191]: 2026-01-27 15:45:12.233 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:16 compute-0 nova_compute[185191]: 2026-01-27 15:45:16.227 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:17 compute-0 nova_compute[185191]: 2026-01-27 15:45:17.236 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:21 compute-0 nova_compute[185191]: 2026-01-27 15:45:21.228 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:22 compute-0 nova_compute[185191]: 2026-01-27 15:45:22.238 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:22 compute-0 podman[254598]: 2026-01-27 15:45:22.321303057 +0000 UTC m=+0.074119148 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 15:45:24 compute-0 podman[254617]: 2026-01-27 15:45:24.365416004 +0000 UTC m=+0.111454590 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 27 15:45:24 compute-0 podman[254619]: 2026-01-27 15:45:24.373148281 +0000 UTC m=+0.106066765 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 15:45:24 compute-0 podman[254618]: 2026-01-27 15:45:24.399445617 +0000 UTC m=+0.147567749 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:45:26 compute-0 nova_compute[185191]: 2026-01-27 15:45:26.232 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:27 compute-0 nova_compute[185191]: 2026-01-27 15:45:27.241 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:29 compute-0 podman[201073]: time="2026-01-27T15:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:45:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:45:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:45:31 compute-0 nova_compute[185191]: 2026-01-27 15:45:31.235 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:31 compute-0 openstack_network_exporter[204239]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:45:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:45:31 compute-0 openstack_network_exporter[204239]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:45:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:45:32 compute-0 nova_compute[185191]: 2026-01-27 15:45:32.244 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:34 compute-0 podman[254680]: 2026-01-27 15:45:34.316123006 +0000 UTC m=+0.069241461 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:45:35 compute-0 sshd-session[254699]: Invalid user stradal from 2.57.122.238 port 49410
Jan 27 15:45:35 compute-0 sshd-session[254699]: Connection closed by invalid user stradal 2.57.122.238 port 49410 [preauth]
Jan 27 15:45:36 compute-0 nova_compute[185191]: 2026-01-27 15:45:36.238 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:37 compute-0 nova_compute[185191]: 2026-01-27 15:45:37.254 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:37 compute-0 podman[254702]: 2026-01-27 15:45:37.329285142 +0000 UTC m=+0.076987388 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:45:37 compute-0 podman[254701]: 2026-01-27 15:45:37.347432159 +0000 UTC m=+0.099154123 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4)
Jan 27 15:45:39 compute-0 nova_compute[185191]: 2026-01-27 15:45:39.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:39 compute-0 nova_compute[185191]: 2026-01-27 15:45:39.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:45:40 compute-0 podman[254740]: 2026-01-27 15:45:40.310133811 +0000 UTC m=+0.065887130 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:45:41 compute-0 nova_compute[185191]: 2026-01-27 15:45:41.246 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:42 compute-0 nova_compute[185191]: 2026-01-27 15:45:42.270 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:44 compute-0 nova_compute[185191]: 2026-01-27 15:45:44.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:44 compute-0 nova_compute[185191]: 2026-01-27 15:45:44.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:45:44 compute-0 nova_compute[185191]: 2026-01-27 15:45:44.994 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:45:44 compute-0 nova_compute[185191]: 2026-01-27 15:45:44.995 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:45:44 compute-0 nova_compute[185191]: 2026-01-27 15:45:44.995 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.080 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.161 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.163 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.241 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:45:45 compute-0 sshd-session[254764]: Invalid user solana from 45.148.10.240 port 39298
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.566 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.568 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=72.31377029418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.569 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.569 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:45:45 compute-0 sshd-session[254764]: Connection closed by invalid user solana 45.148.10.240 port 39298 [preauth]
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.754 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.758 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.759 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.847 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.918 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.919 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.940 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:45:45 compute-0 nova_compute[185191]: 2026-01-27 15:45:45.975 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:45:46 compute-0 nova_compute[185191]: 2026-01-27 15:45:46.028 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:45:46 compute-0 nova_compute[185191]: 2026-01-27 15:45:46.053 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:45:46 compute-0 nova_compute[185191]: 2026-01-27 15:45:46.056 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:45:46 compute-0 nova_compute[185191]: 2026-01-27 15:45:46.056 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:45:46 compute-0 nova_compute[185191]: 2026-01-27 15:45:46.249 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:47 compute-0 nova_compute[185191]: 2026-01-27 15:45:47.274 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:50 compute-0 nova_compute[185191]: 2026-01-27 15:45:50.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:51 compute-0 nova_compute[185191]: 2026-01-27 15:45:51.249 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:52 compute-0 nova_compute[185191]: 2026-01-27 15:45:52.278 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:52 compute-0 nova_compute[185191]: 2026-01-27 15:45:52.958 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:53 compute-0 podman[254773]: 2026-01-27 15:45:53.311841202 +0000 UTC m=+0.065155350 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 27 15:45:53 compute-0 nova_compute[185191]: 2026-01-27 15:45:53.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:53 compute-0 nova_compute[185191]: 2026-01-27 15:45:53.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:54 compute-0 nova_compute[185191]: 2026-01-27 15:45:54.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:55 compute-0 podman[254791]: 2026-01-27 15:45:55.315802141 +0000 UTC m=+0.067330799 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 15:45:55 compute-0 podman[254793]: 2026-01-27 15:45:55.330835124 +0000 UTC m=+0.075693023 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Jan 27 15:45:55 compute-0 podman[254792]: 2026-01-27 15:45:55.376598293 +0000 UTC m=+0.126139348 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 15:45:55 compute-0 nova_compute[185191]: 2026-01-27 15:45:55.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:55 compute-0 nova_compute[185191]: 2026-01-27 15:45:55.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:45:55 compute-0 nova_compute[185191]: 2026-01-27 15:45:55.960 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:45:56 compute-0 nova_compute[185191]: 2026-01-27 15:45:56.252 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:57 compute-0 nova_compute[185191]: 2026-01-27 15:45:57.282 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:45:58 compute-0 nova_compute[185191]: 2026-01-27 15:45:58.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:45:58 compute-0 nova_compute[185191]: 2026-01-27 15:45:58.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:45:58 compute-0 nova_compute[185191]: 2026-01-27 15:45:58.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:45:59 compute-0 podman[201073]: time="2026-01-27T15:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:45:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:45:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4395 "" "Go-http-client/1.1"
Jan 27 15:46:00 compute-0 nova_compute[185191]: 2026-01-27 15:46:00.099 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:46:00 compute-0 nova_compute[185191]: 2026-01-27 15:46:00.100 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:46:00 compute-0 nova_compute[185191]: 2026-01-27 15:46:00.100 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:46:00 compute-0 nova_compute[185191]: 2026-01-27 15:46:00.100 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:46:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:00.269 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:00.270 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:00.270 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:01 compute-0 nova_compute[185191]: 2026-01-27 15:46:01.253 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:01 compute-0 openstack_network_exporter[204239]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:46:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:46:01 compute-0 openstack_network_exporter[204239]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:46:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:46:02 compute-0 nova_compute[185191]: 2026-01-27 15:46:02.285 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.111 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.128 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.128 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.129 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.129 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:04 compute-0 nova_compute[185191]: 2026-01-27 15:46:04.130 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:46:05 compute-0 podman[254853]: 2026-01-27 15:46:05.310023587 +0000 UTC m=+0.070567796 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:46:06 compute-0 nova_compute[185191]: 2026-01-27 15:46:06.255 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:07 compute-0 nova_compute[185191]: 2026-01-27 15:46:07.287 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:08 compute-0 podman[254874]: 2026-01-27 15:46:08.312396335 +0000 UTC m=+0.067441092 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:46:08 compute-0 podman[254873]: 2026-01-27 15:46:08.317849041 +0000 UTC m=+0.078596141 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:46:11 compute-0 nova_compute[185191]: 2026-01-27 15:46:11.257 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:11 compute-0 podman[254914]: 2026-01-27 15:46:11.299950165 +0000 UTC m=+0.061851312 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:46:11 compute-0 nova_compute[185191]: 2026-01-27 15:46:11.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:12 compute-0 nova_compute[185191]: 2026-01-27 15:46:12.290 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:16 compute-0 nova_compute[185191]: 2026-01-27 15:46:16.259 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:17 compute-0 nova_compute[185191]: 2026-01-27 15:46:17.293 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:21 compute-0 nova_compute[185191]: 2026-01-27 15:46:21.262 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:22 compute-0 nova_compute[185191]: 2026-01-27 15:46:22.296 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:24 compute-0 podman[254938]: 2026-01-27 15:46:24.309093185 +0000 UTC m=+0.057281089 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:46:26 compute-0 nova_compute[185191]: 2026-01-27 15:46:26.266 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:26 compute-0 podman[254956]: 2026-01-27 15:46:26.327322537 +0000 UTC m=+0.077229955 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 27 15:46:26 compute-0 podman[254958]: 2026-01-27 15:46:26.354430425 +0000 UTC m=+0.094974381 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:46:26 compute-0 podman[254957]: 2026-01-27 15:46:26.392408875 +0000 UTC m=+0.138485880 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:46:27 compute-0 nova_compute[185191]: 2026-01-27 15:46:27.300 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.588 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.589 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.632 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.719 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.720 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.728 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.729 185195 INFO nova.compute.claims [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Claim successful on node compute-0.ctlplane.example.com
Jan 27 15:46:29 compute-0 podman[201073]: time="2026-01-27T15:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:46:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:46:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.889 185195 DEBUG nova.compute.provider_tree [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.913 185195 DEBUG nova.scheduler.client.report [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.936 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.937 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.992 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 15:46:29 compute-0 nova_compute[185191]: 2026-01-27 15:46:29.993 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.025 185195 INFO nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.066 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.307 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.308 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.308 185195 INFO nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Creating image(s)
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.309 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.310 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.310 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.324 185195 DEBUG nova.policy [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20f0077bc9bd475ebff1667438d2013e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.327 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.385 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.387 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.388 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.405 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.468 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.469 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81,backing_fmt=raw /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.728 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81,backing_fmt=raw /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk 1073741824" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.729 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "ff4242d69ca9d913650cdb12b85ccef3c5758f81" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.730 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.783 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff4242d69ca9d913650cdb12b85ccef3c5758f81 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.785 185195 DEBUG nova.virt.disk.api [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Checking if we can resize image /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.785 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.844 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.845 185195 DEBUG nova.virt.disk.api [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Cannot resize image /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.846 185195 DEBUG nova.objects.instance [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'migration_context' on Instance uuid a0b14d34-73c5-426d-8d69-793643148639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.889 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.889 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Ensure instance console log exists: /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.890 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.891 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:30 compute-0 nova_compute[185191]: 2026-01-27 15:46:30.891 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:31 compute-0 nova_compute[185191]: 2026-01-27 15:46:31.266 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:31.287 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:46:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:31.288 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:46:31 compute-0 nova_compute[185191]: 2026-01-27 15:46:31.289 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:31 compute-0 openstack_network_exporter[204239]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:46:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:46:31 compute-0 openstack_network_exporter[204239]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:46:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:46:31 compute-0 nova_compute[185191]: 2026-01-27 15:46:31.485 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Successfully created port: d11ff881-6533-4499-87d1-ff504269c883 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.303 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.423 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Successfully updated port: d11ff881-6533-4499-87d1-ff504269c883 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.441 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.442 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.442 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.523 185195 DEBUG nova.compute.manager [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-changed-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.524 185195 DEBUG nova.compute.manager [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Refreshing instance network info cache due to event network-changed-d11ff881-6533-4499-87d1-ff504269c883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.525 185195 DEBUG oslo_concurrency.lockutils [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:46:32 compute-0 nova_compute[185191]: 2026-01-27 15:46:32.562 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 15:46:33 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:33.290 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.434 185195 DEBUG nova.network.neutron [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.503 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.504 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Instance network_info: |[{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.504 185195 DEBUG oslo_concurrency.lockutils [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.505 185195 DEBUG nova.network.neutron [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Refreshing network info cache for port d11ff881-6533-4499-87d1-ff504269c883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.508 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Start _get_guest_xml network_info=[{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:41:26Z,direct_url=<?>,disk_format='qcow2',id=9d30f498-7a22-4c96-a758-84b2da277162,min_disk=0,min_ram=0,name='tempest-scenario-img--117615184',owner='20f0077bc9bd475ebff1667438d2013e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:41:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '9d30f498-7a22-4c96-a758-84b2da277162'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.515 185195 WARNING nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.522 185195 DEBUG nova.virt.libvirt.host [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.523 185195 DEBUG nova.virt.libvirt.host [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.529 185195 DEBUG nova.virt.libvirt.host [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.530 185195 DEBUG nova.virt.libvirt.host [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.531 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.532 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T15:34:18Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed09843-3292-40b2-b829-c4ed118e135f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T15:41:26Z,direct_url=<?>,disk_format='qcow2',id=9d30f498-7a22-4c96-a758-84b2da277162,min_disk=0,min_ram=0,name='tempest-scenario-img--117615184',owner='20f0077bc9bd475ebff1667438d2013e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T15:41:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.533 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.533 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.534 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.535 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.535 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.536 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.536 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.537 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.537 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.538 185195 DEBUG nova.virt.hardware [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.542 185195 DEBUG nova.virt.libvirt.vif [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:46:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',id=15,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-xovtw6fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiTest-349502
190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:46:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=a0b14d34-73c5-426d-8d69-793643148639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.544 185195 DEBUG nova.network.os_vif_util [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.545 185195 DEBUG nova.network.os_vif_util [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.546 185195 DEBUG nova.objects.instance [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b14d34-73c5-426d-8d69-793643148639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.584 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] End _get_guest_xml xml=<domain type="kvm">
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <uuid>a0b14d34-73c5-426d-8d69-793643148639</uuid>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <name>instance-0000000f</name>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <memory>131072</memory>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <vcpu>1</vcpu>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <metadata>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:name>te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt</nova:name>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:creationTime>2026-01-27 15:46:33</nova:creationTime>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:flavor name="m1.nano">
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:memory>128</nova:memory>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:disk>1</nova:disk>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:swap>0</nova:swap>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:vcpus>1</nova:vcpus>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       </nova:flavor>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:owner>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:user uuid="2f735dc3417d4dc1830a1081fe9a604b">tempest-PrometheusGabbiTest-349502190-project-member</nova:user>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:project uuid="20f0077bc9bd475ebff1667438d2013e">tempest-PrometheusGabbiTest-349502190</nova:project>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       </nova:owner>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:root type="image" uuid="9d30f498-7a22-4c96-a758-84b2da277162"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <nova:ports>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         <nova:port uuid="d11ff881-6533-4499-87d1-ff504269c883">
Jan 27 15:46:33 compute-0 nova_compute[185191]:           <nova:ip type="fixed" address="10.100.1.167" ipVersion="4"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:         </nova:port>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       </nova:ports>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </nova:instance>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </metadata>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <sysinfo type="smbios">
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <system>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="manufacturer">RDO</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="product">OpenStack Compute</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="serial">a0b14d34-73c5-426d-8d69-793643148639</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="uuid">a0b14d34-73c5-426d-8d69-793643148639</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <entry name="family">Virtual Machine</entry>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </system>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </sysinfo>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <os>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <boot dev="hd"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <smbios mode="sysinfo"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </os>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <features>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <acpi/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <apic/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <vmcoreinfo/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </features>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <clock offset="utc">
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <timer name="hpet" present="no"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </clock>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <cpu mode="host-model" match="exact">
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </cpu>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   <devices>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <disk type="file" device="disk">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <target dev="vda" bus="virtio"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <disk type="file" device="cdrom">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <source file="/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.config"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <target dev="sda" bus="sata"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </disk>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <interface type="ethernet">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <mac address="fa:16:3e:58:8f:27"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <mtu size="1442"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <target dev="tapd11ff881-65"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </interface>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <serial type="pty">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <log file="/var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/console.log" append="off"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </serial>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <video>
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <model type="virtio"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </video>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <input type="tablet" bus="usb"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <rng model="virtio">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <backend model="random">/dev/urandom</backend>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </rng>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <controller type="usb" index="0"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     <memballoon model="virtio">
Jan 27 15:46:33 compute-0 nova_compute[185191]:       <stats period="10"/>
Jan 27 15:46:33 compute-0 nova_compute[185191]:     </memballoon>
Jan 27 15:46:33 compute-0 nova_compute[185191]:   </devices>
Jan 27 15:46:33 compute-0 nova_compute[185191]: </domain>
Jan 27 15:46:33 compute-0 nova_compute[185191]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.596 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Preparing to wait for external event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.597 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.597 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.597 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.598 185195 DEBUG nova.virt.libvirt.vif [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T15:46:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',id=15,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-xovtw6fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiTest-349502190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T15:46:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=a0b14d34-73c5-426d-8d69-793643148639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.598 185195 DEBUG nova.network.os_vif_util [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.599 185195 DEBUG nova.network.os_vif_util [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.599 185195 DEBUG os_vif [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.600 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.600 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.601 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.606 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.606 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd11ff881-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.607 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd11ff881-65, col_values=(('external_ids', {'iface-id': 'd11ff881-6533-4499-87d1-ff504269c883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:8f:27', 'vm-uuid': 'a0b14d34-73c5-426d-8d69-793643148639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.609 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:33 compute-0 NetworkManager[56090]: <info>  [1769528793.6129] manager: (tapd11ff881-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.613 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.618 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.620 185195 INFO os_vif [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65')
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.726 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.727 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.727 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] No VIF found with MAC fa:16:3e:58:8f:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 15:46:33 compute-0 nova_compute[185191]: 2026-01-27 15:46:33.728 185195 INFO nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Using config drive
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.117 185195 INFO nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Creating config drive at /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.config
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.124 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk24nmf7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.254 185195 DEBUG oslo_concurrency.processutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk24nmf7" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:34 compute-0 kernel: tapd11ff881-65: entered promiscuous mode
Jan 27 15:46:34 compute-0 ovn_controller[97541]: 2026-01-27T15:46:34Z|00171|binding|INFO|Claiming lport d11ff881-6533-4499-87d1-ff504269c883 for this chassis.
Jan 27 15:46:34 compute-0 ovn_controller[97541]: 2026-01-27T15:46:34Z|00172|binding|INFO|d11ff881-6533-4499-87d1-ff504269c883: Claiming fa:16:3e:58:8f:27 10.100.1.167
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.333 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:34 compute-0 NetworkManager[56090]: <info>  [1769528794.3453] manager: (tapd11ff881-65): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.345 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:8f:27 10.100.1.167'], port_security=['fa:16:3e:58:8f:27 10.100.1.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.167/16', 'neutron:device_id': 'a0b14d34-73c5-426d-8d69-793643148639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-583566c3-a7da-49ba-8c93-87be3496cb80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20f0077bc9bd475ebff1667438d2013e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c775d39-0088-4183-837a-f310fb1cc533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e677173-f8a0-4b87-8946-43d053c4a459, chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=d11ff881-6533-4499-87d1-ff504269c883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.346 106793 INFO neutron.agent.ovn.metadata.agent [-] Port d11ff881-6533-4499-87d1-ff504269c883 in datapath 583566c3-a7da-49ba-8c93-87be3496cb80 bound to our chassis
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.347 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 583566c3-a7da-49ba-8c93-87be3496cb80
Jan 27 15:46:34 compute-0 ovn_controller[97541]: 2026-01-27T15:46:34Z|00173|binding|INFO|Setting lport d11ff881-6533-4499-87d1-ff504269c883 ovn-installed in OVS
Jan 27 15:46:34 compute-0 ovn_controller[97541]: 2026-01-27T15:46:34Z|00174|binding|INFO|Setting lport d11ff881-6533-4499-87d1-ff504269c883 up in Southbound
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.359 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.371 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d66c9498-ae0a-4ddd-a5e2-75669d4d41b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 systemd-udevd[255055]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 15:46:34 compute-0 systemd-machined[156506]: New machine qemu-16-instance-0000000f.
Jan 27 15:46:34 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Jan 27 15:46:34 compute-0 NetworkManager[56090]: <info>  [1769528794.4004] device (tapd11ff881-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 15:46:34 compute-0 NetworkManager[56090]: <info>  [1769528794.4049] device (tapd11ff881-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.409 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[719e86da-162b-497f-b571-0aa671e57869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.412 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[751879b6-0325-4d69-879c-0012df46a538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.443 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[a27033ec-71ce-4446-b3b5-1114c44f412d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.461 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1802f0ef-8067-4a31-88db-ef4072612a38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap583566c3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:b6:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607940, 'reachable_time': 40178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255062, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.478 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[6340d5c9-8b7a-4f3c-b200-df465eb0be98]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap583566c3-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607951, 'tstamp': 607951}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255067, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap583566c3-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607954, 'tstamp': 607954}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255067, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.480 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap583566c3-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.482 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.483 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.486 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap583566c3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.487 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.488 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap583566c3-a0, col_values=(('external_ids', {'iface-id': '1a1e49d2-439b-4887-8a67-bfa43f528ce6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:46:34 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:46:34.488 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.654 185195 DEBUG nova.compute.manager [req-7460d207-ffba-422b-9472-88fb72fbc5fa req-3898a12b-01a2-419c-848a-ac34a9452ec3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.654 185195 DEBUG oslo_concurrency.lockutils [req-7460d207-ffba-422b-9472-88fb72fbc5fa req-3898a12b-01a2-419c-848a-ac34a9452ec3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.655 185195 DEBUG oslo_concurrency.lockutils [req-7460d207-ffba-422b-9472-88fb72fbc5fa req-3898a12b-01a2-419c-848a-ac34a9452ec3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.655 185195 DEBUG oslo_concurrency.lockutils [req-7460d207-ffba-422b-9472-88fb72fbc5fa req-3898a12b-01a2-419c-848a-ac34a9452ec3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.655 185195 DEBUG nova.compute.manager [req-7460d207-ffba-422b-9472-88fb72fbc5fa req-3898a12b-01a2-419c-848a-ac34a9452ec3 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Processing event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.896 185195 DEBUG nova.network.neutron [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated VIF entry in instance network info cache for port d11ff881-6533-4499-87d1-ff504269c883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.897 185195 DEBUG nova.network.neutron [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:46:34 compute-0 nova_compute[185191]: 2026-01-27 15:46:34.931 185195 DEBUG oslo_concurrency.lockutils [req-46feb04b-9ada-4dab-a14c-3f9dcf4b4b91 req-15ebb261-6a14-4ee3-b601-fc3804c592c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.226 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.226 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528795.225517, a0b14d34-73c5-426d-8d69-793643148639 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.227 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] VM Started (Lifecycle Event)
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.232 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.237 185195 INFO nova.virt.libvirt.driver [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] Instance spawned successfully.
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.237 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.251 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.258 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.262 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.263 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.263 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.263 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.264 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.264 185195 DEBUG nova.virt.libvirt.driver [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.302 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.303 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528795.225609, a0b14d34-73c5-426d-8d69-793643148639 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.303 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] VM Paused (Lifecycle Event)
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.341 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.346 185195 DEBUG nova.virt.driver [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] Emitting event <LifecycleEvent: 1769528795.2310507, a0b14d34-73c5-426d-8d69-793643148639 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.346 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] VM Resumed (Lifecycle Event)
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.356 185195 INFO nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Took 5.05 seconds to spawn the instance on the hypervisor.
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.357 185195 DEBUG nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.368 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.373 185195 DEBUG nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.404 185195 INFO nova.compute.manager [None req-69b28ccd-a3af-46ad-8d0b-16fc7e79d8d4 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.436 185195 INFO nova.compute.manager [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Took 5.75 seconds to build instance.
Jan 27 15:46:35 compute-0 nova_compute[185191]: 2026-01-27 15:46:35.454 185195 DEBUG oslo_concurrency.lockutils [None req-f2f27f88-4f24-4f5d-be40-b19e3ee465db 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.268 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:36 compute-0 podman[255076]: 2026-01-27 15:46:36.319814977 +0000 UTC m=+0.076096605 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.938 185195 DEBUG nova.compute.manager [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.938 185195 DEBUG oslo_concurrency.lockutils [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.939 185195 DEBUG oslo_concurrency.lockutils [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.939 185195 DEBUG oslo_concurrency.lockutils [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.939 185195 DEBUG nova.compute.manager [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] No waiting events found dispatching network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:46:36 compute-0 nova_compute[185191]: 2026-01-27 15:46:36.939 185195 WARNING nova.compute.manager [req-34c17cd4-5f92-4de1-9fb0-5b8cac997e0a req-87acdd47-7dbc-4deb-951d-1fb9cb860689 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received unexpected event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 for instance with vm_state active and task_state None.
Jan 27 15:46:37 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:46:37 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:46:38 compute-0 nova_compute[185191]: 2026-01-27 15:46:38.612 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:39 compute-0 podman[255115]: 2026-01-27 15:46:39.319227964 +0000 UTC m=+0.066314552 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:46:39 compute-0 podman[255114]: 2026-01-27 15:46:39.326168081 +0000 UTC m=+0.075612452 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, vcs-type=git)
Jan 27 15:46:41 compute-0 nova_compute[185191]: 2026-01-27 15:46:41.273 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:42 compute-0 podman[255160]: 2026-01-27 15:46:42.309424025 +0000 UTC m=+0.062056547 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:46:43 compute-0 nova_compute[185191]: 2026-01-27 15:46:43.617 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:44 compute-0 nova_compute[185191]: 2026-01-27 15:46:44.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:44 compute-0 nova_compute[185191]: 2026-01-27 15:46:44.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:44 compute-0 nova_compute[185191]: 2026-01-27 15:46:44.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:44 compute-0 nova_compute[185191]: 2026-01-27 15:46:44.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:44 compute-0 nova_compute[185191]: 2026-01-27 15:46:44.985 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.075 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.142 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.143 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.211 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.221 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.289 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.291 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.358 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.696 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.698 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=72.31307220458984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.698 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.699 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.772 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.773 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.774 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.774 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.841 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.857 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.879 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:46:45 compute-0 nova_compute[185191]: 2026-01-27 15:46:45.879 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:46:46 compute-0 nova_compute[185191]: 2026-01-27 15:46:46.275 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:48 compute-0 nova_compute[185191]: 2026-01-27 15:46:48.622 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:51 compute-0 nova_compute[185191]: 2026-01-27 15:46:51.277 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:53 compute-0 nova_compute[185191]: 2026-01-27 15:46:53.627 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:53 compute-0 nova_compute[185191]: 2026-01-27 15:46:53.875 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:53 compute-0 nova_compute[185191]: 2026-01-27 15:46:53.915 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:54 compute-0 nova_compute[185191]: 2026-01-27 15:46:54.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:55 compute-0 podman[255198]: 2026-01-27 15:46:55.311507837 +0000 UTC m=+0.067762331 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:46:55 compute-0 nova_compute[185191]: 2026-01-27 15:46:55.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:55 compute-0 nova_compute[185191]: 2026-01-27 15:46:55.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:56 compute-0 nova_compute[185191]: 2026-01-27 15:46:56.279 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:57 compute-0 podman[255219]: 2026-01-27 15:46:57.32159906 +0000 UTC m=+0.074025619 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Jan 27 15:46:57 compute-0 podman[255217]: 2026-01-27 15:46:57.327329064 +0000 UTC m=+0.085989840 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 27 15:46:57 compute-0 podman[255218]: 2026-01-27 15:46:57.369628809 +0000 UTC m=+0.124240547 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 15:46:58 compute-0 nova_compute[185191]: 2026-01-27 15:46:58.631 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:46:58 compute-0 nova_compute[185191]: 2026-01-27 15:46:58.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:46:58 compute-0 nova_compute[185191]: 2026-01-27 15:46:58.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:46:58 compute-0 nova_compute[185191]: 2026-01-27 15:46:58.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:46:59 compute-0 podman[201073]: time="2026-01-27T15:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:46:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:46:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 27 15:47:00 compute-0 nova_compute[185191]: 2026-01-27 15:47:00.126 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:47:00 compute-0 nova_compute[185191]: 2026-01-27 15:47:00.126 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:47:00 compute-0 nova_compute[185191]: 2026-01-27 15:47:00.127 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:47:00 compute-0 nova_compute[185191]: 2026-01-27 15:47:00.127 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:47:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:47:00.270 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:47:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:47:00.271 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:47:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:47:00.271 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:47:01 compute-0 nova_compute[185191]: 2026-01-27 15:47:01.282 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:01 compute-0 openstack_network_exporter[204239]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:47:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:47:01 compute-0 openstack_network_exporter[204239]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:47:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.406 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.428 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.428 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.429 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.429 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.430 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:47:03 compute-0 nova_compute[185191]: 2026-01-27 15:47:03.635 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:04 compute-0 ovn_controller[97541]: 2026-01-27T15:47:04Z|00175|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 27 15:47:06 compute-0 nova_compute[185191]: 2026-01-27 15:47:06.285 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:07 compute-0 podman[255282]: 2026-01-27 15:47:07.316533885 +0000 UTC m=+0.070114994 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 15:47:08 compute-0 nova_compute[185191]: 2026-01-27 15:47:08.640 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:10 compute-0 podman[255302]: 2026-01-27 15:47:10.318307226 +0000 UTC m=+0.068616894 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:47:10 compute-0 podman[255301]: 2026-01-27 15:47:10.319812206 +0000 UTC m=+0.074887832 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=)
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.995 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.995 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:47:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:11.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:47:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:11.004 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0b14d34-73c5-426d-8d69-793643148639 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 15:47:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:11.005 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0b14d34-73c5-426d-8d69-793643148639 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82c957adbc17ae7d91b95e243ef95edcae050b803dbf40e883e7549d3d32b40a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 15:47:11 compute-0 nova_compute[185191]: 2026-01-27 15:47:11.287 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:11 compute-0 nova_compute[185191]: 2026-01-27 15:47:11.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.155 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 27 Jan 2026 15:47:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2a8939a7-aa49-416e-8584-637b849bf7e9 x-openstack-request-id: req-2a8939a7-aa49-416e-8584-637b849bf7e9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.156 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a0b14d34-73c5-426d-8d69-793643148639", "name": "te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt", "status": "ACTIVE", "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "user_id": "2f735dc3417d4dc1830a1081fe9a604b", "metadata": {"metering.server_group": "b3308bb6-f54d-4153-86c0-fa8fa74a39af"}, "hostId": "a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc", "image": {"id": "9d30f498-7a22-4c96-a758-84b2da277162", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/9d30f498-7a22-4c96-a758-84b2da277162"}]}, "flavor": {"id": "aed09843-3292-40b2-b829-c4ed118e135f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aed09843-3292-40b2-b829-c4ed118e135f"}]}, "created": "2026-01-27T15:46:28Z", "updated": "2026-01-27T15:46:35Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.167", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:58:8f:27"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a0b14d34-73c5-426d-8d69-793643148639"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a0b14d34-73c5-426d-8d69-793643148639"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T15:46:35.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.156 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0b14d34-73c5-426d-8d69-793643148639 used request id req-2a8939a7-aa49-416e-8584-637b849bf7e9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.157 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a0b14d34-73c5-426d-8d69-793643148639', 'name': 'te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:47:13.158999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.197 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3583670186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.197 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.239 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 21766286034 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.240 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.241 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.241 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.241 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.241 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 193 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.241 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.245 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:47:13.241007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:47:13.242481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.259 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.260 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.274 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 28319744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.276 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:47:13.275979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.280 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.285 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a0b14d34-73c5-426d-8d69-793643148639 / tapd11ff881-65 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.285 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:47:13.286677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:47:13.287947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:47:13.288986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:47:13.290128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 podman[255363]: 2026-01-27 15:47:13.308161766 +0000 UTC m=+0.060909046 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.317 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 326240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.339 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/cpu volume: 35960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.340 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.341 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.342 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.342 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:47:13.340819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:47:13.342216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.344 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 43.58203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.344 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/memory.usage volume: 40.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 1436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.345 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes volume: 616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:47:13.344151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt>]
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.348 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.348 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.349 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.350 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:47:13.345475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T15:47:13.346895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:47:13.348085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.352 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.353 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.353 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.353 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.353 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.354 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:47:13.349689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.357 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.357 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 28929024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.358 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.359 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 27356160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.359 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:47:13.351313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.360 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1006510145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.361 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 65762611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.362 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 2441881307 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.362 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 256067952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt>]
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:47:13.353106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.364 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.365 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 937 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.365 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:47:13.356899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.367 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.367 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 28246016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:47:13.358369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.367 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:47:13.360474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:47:13.361623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.370 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.371 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 25628672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.371 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T15:47:13.363422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:47:13.364548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.374 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:47:13.366777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:47:13.369003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:47:13.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:47:13.370357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:47:13 compute-0 nova_compute[185191]: 2026-01-27 15:47:13.645 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:14 compute-0 ovn_controller[97541]: 2026-01-27T15:47:14Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:8f:27 10.100.1.167
Jan 27 15:47:14 compute-0 ovn_controller[97541]: 2026-01-27T15:47:14Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:8f:27 10.100.1.167
Jan 27 15:47:16 compute-0 nova_compute[185191]: 2026-01-27 15:47:16.293 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:18 compute-0 nova_compute[185191]: 2026-01-27 15:47:18.650 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:21 compute-0 nova_compute[185191]: 2026-01-27 15:47:21.296 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:23 compute-0 nova_compute[185191]: 2026-01-27 15:47:23.655 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:26 compute-0 nova_compute[185191]: 2026-01-27 15:47:26.298 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:26 compute-0 podman[255390]: 2026-01-27 15:47:26.336502053 +0000 UTC m=+0.092401602 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:47:28 compute-0 podman[255409]: 2026-01-27 15:47:28.30725418 +0000 UTC m=+0.066859817 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 15:47:28 compute-0 podman[255411]: 2026-01-27 15:47:28.344829799 +0000 UTC m=+0.094041377 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:47:28 compute-0 podman[255410]: 2026-01-27 15:47:28.358376842 +0000 UTC m=+0.113028136 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 15:47:28 compute-0 nova_compute[185191]: 2026-01-27 15:47:28.658 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:29 compute-0 podman[201073]: time="2026-01-27T15:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:47:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:47:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 27 15:47:31 compute-0 nova_compute[185191]: 2026-01-27 15:47:31.300 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:31 compute-0 openstack_network_exporter[204239]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:47:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:47:31 compute-0 openstack_network_exporter[204239]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:47:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:47:33 compute-0 nova_compute[185191]: 2026-01-27 15:47:33.662 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:36 compute-0 nova_compute[185191]: 2026-01-27 15:47:36.303 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:38 compute-0 podman[255470]: 2026-01-27 15:47:38.31577015 +0000 UTC m=+0.074965983 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:47:38 compute-0 nova_compute[185191]: 2026-01-27 15:47:38.667 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:40 compute-0 sshd-session[255488]: Invalid user sol from 2.57.122.238 port 49458
Jan 27 15:47:40 compute-0 sshd-session[255488]: Connection closed by invalid user sol 2.57.122.238 port 49458 [preauth]
Jan 27 15:47:41 compute-0 nova_compute[185191]: 2026-01-27 15:47:41.305 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:41 compute-0 podman[255491]: 2026-01-27 15:47:41.313356579 +0000 UTC m=+0.069130987 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:47:41 compute-0 podman[255490]: 2026-01-27 15:47:41.31561356 +0000 UTC m=+0.075467427 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=)
Jan 27 15:47:43 compute-0 nova_compute[185191]: 2026-01-27 15:47:43.670 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:44 compute-0 podman[255531]: 2026-01-27 15:47:44.301528265 +0000 UTC m=+0.061757399 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:47:44 compute-0 nova_compute[185191]: 2026-01-27 15:47:44.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:44 compute-0 nova_compute[185191]: 2026-01-27 15:47:44.994 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:47:44 compute-0 nova_compute[185191]: 2026-01-27 15:47:44.995 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:47:44 compute-0 nova_compute[185191]: 2026-01-27 15:47:44.995 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:47:44 compute-0 nova_compute[185191]: 2026-01-27 15:47:44.996 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.112 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.186 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.187 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.246 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.252 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.310 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.311 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.373 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.679 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.682 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4998MB free_disk=72.284912109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.682 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:47:45 compute-0 nova_compute[185191]: 2026-01-27 15:47:45.683 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.220 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.221 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.221 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.221 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.306 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.368 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.384 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.386 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:47:46 compute-0 nova_compute[185191]: 2026-01-27 15:47:46.386 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:47:48 compute-0 nova_compute[185191]: 2026-01-27 15:47:48.673 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:51 compute-0 nova_compute[185191]: 2026-01-27 15:47:51.309 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:53 compute-0 nova_compute[185191]: 2026-01-27 15:47:53.676 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:55 compute-0 nova_compute[185191]: 2026-01-27 15:47:55.385 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:55 compute-0 nova_compute[185191]: 2026-01-27 15:47:55.386 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:55 compute-0 nova_compute[185191]: 2026-01-27 15:47:55.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:56 compute-0 nova_compute[185191]: 2026-01-27 15:47:56.312 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:56 compute-0 nova_compute[185191]: 2026-01-27 15:47:56.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:57 compute-0 podman[255571]: 2026-01-27 15:47:57.31004692 +0000 UTC m=+0.061355328 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 15:47:58 compute-0 nova_compute[185191]: 2026-01-27 15:47:58.679 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:47:58 compute-0 nova_compute[185191]: 2026-01-27 15:47:58.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:47:58 compute-0 nova_compute[185191]: 2026-01-27 15:47:58.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:47:59 compute-0 podman[255588]: 2026-01-27 15:47:59.324457061 +0000 UTC m=+0.073446253 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:47:59 compute-0 podman[255590]: 2026-01-27 15:47:59.355603388 +0000 UTC m=+0.095399613 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container)
Jan 27 15:47:59 compute-0 podman[255589]: 2026-01-27 15:47:59.361930968 +0000 UTC m=+0.105033732 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 15:47:59 compute-0 podman[201073]: time="2026-01-27T15:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:47:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:47:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:48:00 compute-0 nova_compute[185191]: 2026-01-27 15:48:00.164 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:48:00 compute-0 nova_compute[185191]: 2026-01-27 15:48:00.165 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:48:00 compute-0 nova_compute[185191]: 2026-01-27 15:48:00.165 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:48:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:48:00.271 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:48:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:48:00.272 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:48:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:48:00.272 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:48:01 compute-0 nova_compute[185191]: 2026-01-27 15:48:01.314 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:01 compute-0 openstack_network_exporter[204239]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:48:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:48:01 compute-0 openstack_network_exporter[204239]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:48:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.208 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.236 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.237 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.237 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.237 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:02 compute-0 nova_compute[185191]: 2026-01-27 15:48:02.238 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:48:03 compute-0 sshd-session[255650]: Invalid user sol from 45.148.10.240 port 52958
Jan 27 15:48:03 compute-0 sshd-session[255650]: Connection closed by invalid user sol 45.148.10.240 port 52958 [preauth]
Jan 27 15:48:03 compute-0 nova_compute[185191]: 2026-01-27 15:48:03.682 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:06 compute-0 nova_compute[185191]: 2026-01-27 15:48:06.317 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:08 compute-0 nova_compute[185191]: 2026-01-27 15:48:08.685 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:09 compute-0 podman[255652]: 2026-01-27 15:48:09.31094484 +0000 UTC m=+0.064006200 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 15:48:11 compute-0 nova_compute[185191]: 2026-01-27 15:48:11.320 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:12 compute-0 podman[255672]: 2026-01-27 15:48:12.308487577 +0000 UTC m=+0.059518309 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:48:12 compute-0 podman[255671]: 2026-01-27 15:48:12.325553065 +0000 UTC m=+0.079534166 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, version=9.4)
Jan 27 15:48:13 compute-0 nova_compute[185191]: 2026-01-27 15:48:13.688 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:13 compute-0 nova_compute[185191]: 2026-01-27 15:48:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:14 compute-0 podman[255713]: 2026-01-27 15:48:14.761480492 +0000 UTC m=+0.091487397 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:48:16 compute-0 nova_compute[185191]: 2026-01-27 15:48:16.322 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:18 compute-0 nova_compute[185191]: 2026-01-27 15:48:18.692 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:21 compute-0 nova_compute[185191]: 2026-01-27 15:48:21.324 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:23 compute-0 nova_compute[185191]: 2026-01-27 15:48:23.698 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:26 compute-0 nova_compute[185191]: 2026-01-27 15:48:26.325 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:28 compute-0 podman[255748]: 2026-01-27 15:48:28.312345739 +0000 UTC m=+0.052338446 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 15:48:28 compute-0 nova_compute[185191]: 2026-01-27 15:48:28.702 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:29 compute-0 podman[201073]: time="2026-01-27T15:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:48:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:48:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4394 "" "Go-http-client/1.1"
Jan 27 15:48:30 compute-0 podman[255767]: 2026-01-27 15:48:30.305249061 +0000 UTC m=+0.065281864 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 15:48:30 compute-0 podman[255769]: 2026-01-27 15:48:30.337544988 +0000 UTC m=+0.086600237 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, release=1755695350, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:48:30 compute-0 podman[255768]: 2026-01-27 15:48:30.3543896 +0000 UTC m=+0.110382745 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:48:31 compute-0 nova_compute[185191]: 2026-01-27 15:48:31.328 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:31 compute-0 openstack_network_exporter[204239]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:48:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:48:31 compute-0 openstack_network_exporter[204239]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:48:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:48:33 compute-0 nova_compute[185191]: 2026-01-27 15:48:33.708 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:36 compute-0 nova_compute[185191]: 2026-01-27 15:48:36.331 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:48:38 compute-0 nova_compute[185191]: 2026-01-27 15:48:38.712 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:40 compute-0 podman[255829]: 2026-01-27 15:48:40.359077468 +0000 UTC m=+0.115087041 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 15:48:41 compute-0 nova_compute[185191]: 2026-01-27 15:48:41.334 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:43 compute-0 podman[255849]: 2026-01-27 15:48:43.311556195 +0000 UTC m=+0.061096231 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:48:43 compute-0 podman[255848]: 2026-01-27 15:48:43.321193064 +0000 UTC m=+0.076637709 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, container_name=kepler)
Jan 27 15:48:43 compute-0 nova_compute[185191]: 2026-01-27 15:48:43.717 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:44 compute-0 nova_compute[185191]: 2026-01-27 15:48:44.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.004 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.004 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.005 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.005 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.092 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.157 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.159 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.226 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.232 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.307 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.308 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:48:45 compute-0 podman[255894]: 2026-01-27 15:48:45.313080299 +0000 UTC m=+0.068169111 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.375 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.737 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.739 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5009MB free_disk=72.284912109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.739 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.740 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.825 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.825 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.826 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.826 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.881 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.900 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.902 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:48:45 compute-0 nova_compute[185191]: 2026-01-27 15:48:45.902 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:48:46 compute-0 nova_compute[185191]: 2026-01-27 15:48:46.336 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:48 compute-0 nova_compute[185191]: 2026-01-27 15:48:48.721 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:51 compute-0 nova_compute[185191]: 2026-01-27 15:48:51.338 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:53 compute-0 nova_compute[185191]: 2026-01-27 15:48:53.726 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:53 compute-0 nova_compute[185191]: 2026-01-27 15:48:53.898 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:53 compute-0 nova_compute[185191]: 2026-01-27 15:48:53.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:54 compute-0 nova_compute[185191]: 2026-01-27 15:48:54.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:55 compute-0 nova_compute[185191]: 2026-01-27 15:48:55.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:56 compute-0 nova_compute[185191]: 2026-01-27 15:48:56.341 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:57 compute-0 nova_compute[185191]: 2026-01-27 15:48:57.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:48:58 compute-0 nova_compute[185191]: 2026-01-27 15:48:58.731 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:48:59 compute-0 podman[255927]: 2026-01-27 15:48:59.309232391 +0000 UTC m=+0.066006693 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:48:59 compute-0 podman[201073]: time="2026-01-27T15:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:48:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:48:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4394 "" "Go-http-client/1.1"
Jan 27 15:49:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:49:00.272 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:49:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:49:00.272 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:49:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:49:00.273 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:49:00 compute-0 nova_compute[185191]: 2026-01-27 15:49:00.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:00 compute-0 nova_compute[185191]: 2026-01-27 15:49:00.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:49:00 compute-0 nova_compute[185191]: 2026-01-27 15:49:00.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:49:01 compute-0 nova_compute[185191]: 2026-01-27 15:49:01.204 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:49:01 compute-0 nova_compute[185191]: 2026-01-27 15:49:01.204 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:49:01 compute-0 nova_compute[185191]: 2026-01-27 15:49:01.205 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:49:01 compute-0 nova_compute[185191]: 2026-01-27 15:49:01.205 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:49:01 compute-0 podman[255943]: 2026-01-27 15:49:01.312242504 +0000 UTC m=+0.069719273 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 15:49:01 compute-0 podman[255945]: 2026-01-27 15:49:01.318298717 +0000 UTC m=+0.068062349 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 15:49:01 compute-0 nova_compute[185191]: 2026-01-27 15:49:01.342 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:01 compute-0 podman[255944]: 2026-01-27 15:49:01.3649601 +0000 UTC m=+0.110965871 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:49:01 compute-0 openstack_network_exporter[204239]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:49:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:49:01 compute-0 openstack_network_exporter[204239]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:49:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.226 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.241 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.242 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.242 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.242 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.734 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:03 compute-0 nova_compute[185191]: 2026-01-27 15:49:03.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:06 compute-0 nova_compute[185191]: 2026-01-27 15:49:06.346 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:08 compute-0 nova_compute[185191]: 2026-01-27 15:49:08.738 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.995 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:49:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.996 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:49:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.011 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a0b14d34-73c5-426d-8d69-793643148639', 'name': 'te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:49:11.011955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.050 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3634122906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.050 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.083 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 21999814227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.084 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.085 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 281 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:49:11.085282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:49:11.086918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.099 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.099 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.111 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.112 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.112 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:49:11.113189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.117 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.120 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:49:11.121828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:49:11.122966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:49:11.123981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:49:11.125434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.148 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 335840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.172 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/cpu volume: 153050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.173 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.174 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:49:11.173757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.175 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:49:11.175344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.177 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 42.33203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.177 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/memory.usage volume: 47.39453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.177 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:49:11.176907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:49:11.178168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:49:11.179425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.179 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.180 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes.delta volume: 1360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:49:11.180509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.181 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.182 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:49:11.181684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:49:11.182739) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 30153216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:49:11.184216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:49:11.185319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.185 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 29244416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.186 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes.delta volume: 1530 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1063775781 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:49:11.186787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 107780486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:49:11.187939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 2503579943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 256067952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.189 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:49:11.189504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:49:11.190940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:49:11.192349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.194 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:49:11.193446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.194 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:49:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:49:11 compute-0 podman[256008]: 2026-01-27 15:49:11.314831117 +0000 UTC m=+0.072559530 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Jan 27 15:49:11 compute-0 nova_compute[185191]: 2026-01-27 15:49:11.348 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:13 compute-0 nova_compute[185191]: 2026-01-27 15:49:13.741 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:14 compute-0 podman[256029]: 2026-01-27 15:49:14.307246816 +0000 UTC m=+0.059540879 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:49:14 compute-0 podman[256028]: 2026-01-27 15:49:14.339995216 +0000 UTC m=+0.097505230 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, config_id=kepler, name=ubi9)
Jan 27 15:49:14 compute-0 nova_compute[185191]: 2026-01-27 15:49:14.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:16 compute-0 podman[256069]: 2026-01-27 15:49:16.344636442 +0000 UTC m=+0.085533187 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:49:16 compute-0 nova_compute[185191]: 2026-01-27 15:49:16.350 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:18 compute-0 nova_compute[185191]: 2026-01-27 15:49:18.745 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:21 compute-0 nova_compute[185191]: 2026-01-27 15:49:21.354 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:23 compute-0 nova_compute[185191]: 2026-01-27 15:49:23.749 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:26 compute-0 nova_compute[185191]: 2026-01-27 15:49:26.355 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:28 compute-0 nova_compute[185191]: 2026-01-27 15:49:28.753 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:29 compute-0 podman[201073]: time="2026-01-27T15:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:49:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:49:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4395 "" "Go-http-client/1.1"
Jan 27 15:49:30 compute-0 podman[256093]: 2026-01-27 15:49:30.315510456 +0000 UTC m=+0.068473620 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:49:31 compute-0 nova_compute[185191]: 2026-01-27 15:49:31.357 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:31 compute-0 openstack_network_exporter[204239]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:49:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:49:31 compute-0 openstack_network_exporter[204239]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:49:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:49:32 compute-0 podman[256111]: 2026-01-27 15:49:32.3248684 +0000 UTC m=+0.073685400 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:49:32 compute-0 podman[256109]: 2026-01-27 15:49:32.330030729 +0000 UTC m=+0.086644538 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:49:32 compute-0 podman[256110]: 2026-01-27 15:49:32.376679091 +0000 UTC m=+0.128583833 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 15:49:33 compute-0 nova_compute[185191]: 2026-01-27 15:49:33.756 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:36 compute-0 nova_compute[185191]: 2026-01-27 15:49:36.359 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:38 compute-0 nova_compute[185191]: 2026-01-27 15:49:38.760 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:41 compute-0 nova_compute[185191]: 2026-01-27 15:49:41.361 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:42 compute-0 podman[256175]: 2026-01-27 15:49:42.299100619 +0000 UTC m=+0.059557590 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 15:49:43 compute-0 nova_compute[185191]: 2026-01-27 15:49:43.764 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:44 compute-0 podman[256194]: 2026-01-27 15:49:44.74483582 +0000 UTC m=+0.067109753 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image 
is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.openshift.expose-services=)
Jan 27 15:49:44 compute-0 podman[256195]: 2026-01-27 15:49:44.746367301 +0000 UTC m=+0.063755963 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:49:44 compute-0 nova_compute[185191]: 2026-01-27 15:49:44.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:44 compute-0 nova_compute[185191]: 2026-01-27 15:49:44.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:49:44 compute-0 nova_compute[185191]: 2026-01-27 15:49:44.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:49:44 compute-0 nova_compute[185191]: 2026-01-27 15:49:44.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:49:44 compute-0 nova_compute[185191]: 2026-01-27 15:49:44.976 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.050 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.112 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.113 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.175 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.183 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.238 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.239 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.298 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.638 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.640 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5007MB free_disk=72.28493118286133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.641 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.642 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.722 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.723 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.723 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.723 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.784 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.806 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.808 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:49:45 compute-0 nova_compute[185191]: 2026-01-27 15:49:45.808 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:49:46 compute-0 nova_compute[185191]: 2026-01-27 15:49:46.362 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:47 compute-0 podman[256248]: 2026-01-27 15:49:47.308478199 +0000 UTC m=+0.064083152 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:49:48 compute-0 sshd-session[256273]: Invalid user sol from 2.57.122.238 port 35068
Jan 27 15:49:48 compute-0 nova_compute[185191]: 2026-01-27 15:49:48.769 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:48 compute-0 sshd-session[256273]: Connection closed by invalid user sol 2.57.122.238 port 35068 [preauth]
Jan 27 15:49:51 compute-0 nova_compute[185191]: 2026-01-27 15:49:51.365 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:53 compute-0 nova_compute[185191]: 2026-01-27 15:49:53.774 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:55 compute-0 nova_compute[185191]: 2026-01-27 15:49:55.809 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:55 compute-0 nova_compute[185191]: 2026-01-27 15:49:55.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:56 compute-0 nova_compute[185191]: 2026-01-27 15:49:56.367 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:56 compute-0 nova_compute[185191]: 2026-01-27 15:49:56.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:57 compute-0 nova_compute[185191]: 2026-01-27 15:49:57.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:49:58 compute-0 nova_compute[185191]: 2026-01-27 15:49:58.779 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:49:59 compute-0 podman[201073]: time="2026-01-27T15:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:49:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:49:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 27 15:50:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:50:00.273 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:50:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:50:00.274 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:50:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:50:00.275 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:50:01 compute-0 podman[256276]: 2026-01-27 15:50:01.328403181 +0000 UTC m=+0.085462095 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 15:50:01 compute-0 nova_compute[185191]: 2026-01-27 15:50:01.369 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:01 compute-0 openstack_network_exporter[204239]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:50:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:50:01 compute-0 openstack_network_exporter[204239]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:50:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:50:01 compute-0 nova_compute[185191]: 2026-01-27 15:50:01.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:01 compute-0 nova_compute[185191]: 2026-01-27 15:50:01.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:50:02 compute-0 nova_compute[185191]: 2026-01-27 15:50:02.265 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:50:02 compute-0 nova_compute[185191]: 2026-01-27 15:50:02.266 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:50:02 compute-0 nova_compute[185191]: 2026-01-27 15:50:02.266 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:50:03 compute-0 podman[256295]: 2026-01-27 15:50:03.329969325 +0000 UTC m=+0.073197796 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 27 15:50:03 compute-0 podman[256297]: 2026-01-27 15:50:03.378591141 +0000 UTC m=+0.107808036 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Jan 27 15:50:03 compute-0 podman[256296]: 2026-01-27 15:50:03.409326776 +0000 UTC m=+0.142570939 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:50:03 compute-0 nova_compute[185191]: 2026-01-27 15:50:03.782 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.215 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.233 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.233 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.234 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.235 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:04 compute-0 nova_compute[185191]: 2026-01-27 15:50:04.235 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:50:06 compute-0 nova_compute[185191]: 2026-01-27 15:50:06.371 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:08 compute-0 nova_compute[185191]: 2026-01-27 15:50:08.788 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:11 compute-0 nova_compute[185191]: 2026-01-27 15:50:11.373 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:13 compute-0 podman[256358]: 2026-01-27 15:50:13.30886371 +0000 UTC m=+0.067310739 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 15:50:13 compute-0 nova_compute[185191]: 2026-01-27 15:50:13.793 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:14 compute-0 nova_compute[185191]: 2026-01-27 15:50:14.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:15 compute-0 podman[256377]: 2026-01-27 15:50:15.317592817 +0000 UTC m=+0.072187549 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Jan 27 15:50:15 compute-0 podman[256378]: 2026-01-27 15:50:15.342805744 +0000 UTC m=+0.089924896 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:50:16 compute-0 nova_compute[185191]: 2026-01-27 15:50:16.375 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:18 compute-0 podman[256419]: 2026-01-27 15:50:18.305888486 +0000 UTC m=+0.063272930 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:50:18 compute-0 nova_compute[185191]: 2026-01-27 15:50:18.796 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:21 compute-0 nova_compute[185191]: 2026-01-27 15:50:21.378 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:22 compute-0 sshd-session[256442]: Invalid user sol from 45.148.10.240 port 35130
Jan 27 15:50:22 compute-0 sshd-session[256442]: Connection closed by invalid user sol 45.148.10.240 port 35130 [preauth]
Jan 27 15:50:23 compute-0 nova_compute[185191]: 2026-01-27 15:50:23.798 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:26 compute-0 nova_compute[185191]: 2026-01-27 15:50:26.380 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:28 compute-0 nova_compute[185191]: 2026-01-27 15:50:28.801 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:29 compute-0 podman[201073]: time="2026-01-27T15:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:50:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:50:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 27 15:50:31 compute-0 nova_compute[185191]: 2026-01-27 15:50:31.382 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:31 compute-0 openstack_network_exporter[204239]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:50:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:50:31 compute-0 openstack_network_exporter[204239]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:50:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:50:32 compute-0 podman[256444]: 2026-01-27 15:50:32.296981984 +0000 UTC m=+0.053521398 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:50:33 compute-0 nova_compute[185191]: 2026-01-27 15:50:33.805 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:34 compute-0 podman[256465]: 2026-01-27 15:50:34.332098899 +0000 UTC m=+0.074295086 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 15:50:34 compute-0 podman[256463]: 2026-01-27 15:50:34.335013667 +0000 UTC m=+0.081031187 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:50:34 compute-0 podman[256464]: 2026-01-27 15:50:34.365721021 +0000 UTC m=+0.110200040 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 15:50:36 compute-0 nova_compute[185191]: 2026-01-27 15:50:36.385 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:38 compute-0 nova_compute[185191]: 2026-01-27 15:50:38.807 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:41 compute-0 nova_compute[185191]: 2026-01-27 15:50:41.387 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:43 compute-0 nova_compute[185191]: 2026-01-27 15:50:43.810 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:44 compute-0 podman[256524]: 2026-01-27 15:50:44.311898308 +0000 UTC m=+0.065737266 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 15:50:44 compute-0 nova_compute[185191]: 2026-01-27 15:50:44.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:44 compute-0 nova_compute[185191]: 2026-01-27 15:50:44.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:50:44 compute-0 nova_compute[185191]: 2026-01-27 15:50:44.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:50:44 compute-0 nova_compute[185191]: 2026-01-27 15:50:44.976 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:50:44 compute-0 nova_compute[185191]: 2026-01-27 15:50:44.977 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.061 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.122 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.123 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.185 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.191 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.253 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.254 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.320 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.627 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.628 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5003MB free_disk=72.284912109375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.629 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.630 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.787 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.788 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.788 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.789 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.955 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.971 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.973 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:50:45 compute-0 nova_compute[185191]: 2026-01-27 15:50:45.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:50:46 compute-0 podman[256555]: 2026-01-27 15:50:46.305895699 +0000 UTC m=+0.056122268 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:50:46 compute-0 podman[256554]: 2026-01-27 15:50:46.330204052 +0000 UTC m=+0.082288291 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:50:46 compute-0 nova_compute[185191]: 2026-01-27 15:50:46.389 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:48 compute-0 nova_compute[185191]: 2026-01-27 15:50:48.813 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:49 compute-0 podman[256597]: 2026-01-27 15:50:49.300765325 +0000 UTC m=+0.055560613 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:50:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 15:50:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 15:50:49 compute-0 nova_compute[185191]: 2026-01-27 15:50:49.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:49 compute-0 nova_compute[185191]: 2026-01-27 15:50:49.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:50:51 compute-0 nova_compute[185191]: 2026-01-27 15:50:51.391 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:53 compute-0 nova_compute[185191]: 2026-01-27 15:50:53.816 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:54 compute-0 nova_compute[185191]: 2026-01-27 15:50:54.956 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:55 compute-0 nova_compute[185191]: 2026-01-27 15:50:55.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:56 compute-0 nova_compute[185191]: 2026-01-27 15:50:56.394 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:56 compute-0 nova_compute[185191]: 2026-01-27 15:50:56.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:57 compute-0 nova_compute[185191]: 2026-01-27 15:50:57.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:57 compute-0 nova_compute[185191]: 2026-01-27 15:50:57.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:50:58 compute-0 nova_compute[185191]: 2026-01-27 15:50:58.818 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:50:59 compute-0 podman[201073]: time="2026-01-27T15:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:50:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:50:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 27 15:50:59 compute-0 nova_compute[185191]: 2026-01-27 15:50:59.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:51:00.274 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:51:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:51:00.275 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:51:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:51:00.275 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:51:01 compute-0 nova_compute[185191]: 2026-01-27 15:51:01.396 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:01 compute-0 openstack_network_exporter[204239]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:51:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:51:01 compute-0 openstack_network_exporter[204239]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:51:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:51:01 compute-0 nova_compute[185191]: 2026-01-27 15:51:01.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:01 compute-0 nova_compute[185191]: 2026-01-27 15:51:01.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:51:01 compute-0 nova_compute[185191]: 2026-01-27 15:51:01.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:51:02 compute-0 nova_compute[185191]: 2026-01-27 15:51:02.312 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:51:02 compute-0 nova_compute[185191]: 2026-01-27 15:51:02.312 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:51:02 compute-0 nova_compute[185191]: 2026-01-27 15:51:02.313 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:51:02 compute-0 nova_compute[185191]: 2026-01-27 15:51:02.313 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:51:03 compute-0 podman[256641]: 2026-01-27 15:51:03.299104777 +0000 UTC m=+0.059070957 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.487 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.502 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.503 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.503 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.503 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.823 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:03 compute-0 nova_compute[185191]: 2026-01-27 15:51:03.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:05 compute-0 podman[256660]: 2026-01-27 15:51:05.326248978 +0000 UTC m=+0.073753022 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:51:05 compute-0 podman[256662]: 2026-01-27 15:51:05.337039267 +0000 UTC m=+0.077526702 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 15:51:05 compute-0 podman[256661]: 2026-01-27 15:51:05.399641458 +0000 UTC m=+0.144952953 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:51:06 compute-0 nova_compute[185191]: 2026-01-27 15:51:06.398 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:08 compute-0 nova_compute[185191]: 2026-01-27 15:51:08.827 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:08 compute-0 nova_compute[185191]: 2026-01-27 15:51:08.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:08 compute-0 nova_compute[185191]: 2026-01-27 15:51:08.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:51:08 compute-0 nova_compute[185191]: 2026-01-27 15:51:08.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.996 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.996 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.996 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.005 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a0b14d34-73c5-426d-8d69-793643148639', 'name': 'te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:51:11.005938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.056 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3634122906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.057 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.092 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 21999814227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.093 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.094 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.095 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.095 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 281 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.095 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:51:11.094723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:51:11.096590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.109 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.109 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.123 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.123 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:51:11.124531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.128 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:51:11.132975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:51:11.134112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:51:11.135049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:51:11.136316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.155 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 337000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.175 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/cpu volume: 272770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.176 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.177 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.178 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.178 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:51:11.176606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 42.33203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/memory.usage volume: 47.39453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:51:11.177860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.180 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:51:11.179146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:51:11.180435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:51:11.182017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 30153216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 29244416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:51:11.183048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.188 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:51:11.184020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.189 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1063775781 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 107780486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 2503579943 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 256067952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.190 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.191 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.192 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.193 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.193 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.194 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:51:11.185047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:51:11.186496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:51:11.187429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:51:11.188848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.198 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:51:11.189811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:51:11.191351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:51:11.192789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:51:11.194399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:51:11.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:51:11.196033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:51:11 compute-0 nova_compute[185191]: 2026-01-27 15:51:11.400 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:13 compute-0 nova_compute[185191]: 2026-01-27 15:51:13.834 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:14 compute-0 podman[256724]: 2026-01-27 15:51:14.771264037 +0000 UTC m=+0.082743003 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 15:51:14 compute-0 nova_compute[185191]: 2026-01-27 15:51:14.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:16 compute-0 nova_compute[185191]: 2026-01-27 15:51:16.402 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:17 compute-0 podman[256743]: 2026-01-27 15:51:17.306777158 +0000 UTC m=+0.067272947 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, container_name=kepler, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Jan 27 15:51:17 compute-0 podman[256744]: 2026-01-27 15:51:17.3072408 +0000 UTC m=+0.063995419 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:51:18 compute-0 nova_compute[185191]: 2026-01-27 15:51:18.837 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:20 compute-0 podman[256790]: 2026-01-27 15:51:20.298628832 +0000 UTC m=+0.054923745 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:51:21 compute-0 nova_compute[185191]: 2026-01-27 15:51:21.404 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:23 compute-0 nova_compute[185191]: 2026-01-27 15:51:23.839 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:26 compute-0 nova_compute[185191]: 2026-01-27 15:51:26.406 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:28 compute-0 nova_compute[185191]: 2026-01-27 15:51:28.841 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:29 compute-0 podman[201073]: time="2026-01-27T15:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:51:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:51:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:51:31 compute-0 nova_compute[185191]: 2026-01-27 15:51:31.408 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:31 compute-0 openstack_network_exporter[204239]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:51:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:51:31 compute-0 openstack_network_exporter[204239]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:51:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:51:33 compute-0 nova_compute[185191]: 2026-01-27 15:51:33.844 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:34 compute-0 podman[256814]: 2026-01-27 15:51:34.315202494 +0000 UTC m=+0.063466255 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 15:51:36 compute-0 podman[256831]: 2026-01-27 15:51:36.328057841 +0000 UTC m=+0.077806590 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true)
Jan 27 15:51:36 compute-0 podman[256833]: 2026-01-27 15:51:36.328336949 +0000 UTC m=+0.071453340 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 27 15:51:36 compute-0 podman[256832]: 2026-01-27 15:51:36.360362639 +0000 UTC m=+0.107028065 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 15:51:36 compute-0 nova_compute[185191]: 2026-01-27 15:51:36.409 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:38 compute-0 nova_compute[185191]: 2026-01-27 15:51:38.845 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:41 compute-0 nova_compute[185191]: 2026-01-27 15:51:41.411 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:43 compute-0 nova_compute[185191]: 2026-01-27 15:51:43.850 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:45 compute-0 podman[256895]: 2026-01-27 15:51:45.315437903 +0000 UTC m=+0.065143159 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:51:45 compute-0 nova_compute[185191]: 2026-01-27 15:51:45.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:45 compute-0 nova_compute[185191]: 2026-01-27 15:51:45.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:51:45 compute-0 nova_compute[185191]: 2026-01-27 15:51:45.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:51:45 compute-0 nova_compute[185191]: 2026-01-27 15:51:45.985 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:51:45 compute-0 nova_compute[185191]: 2026-01-27 15:51:45.985 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.064 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.139 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.140 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.201 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.210 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.270 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.271 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.333 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.413 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.678 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.680 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4999MB free_disk=72.281982421875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.681 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.682 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.765 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.766 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.766 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.767 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.783 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.801 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.802 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.825 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.845 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.906 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.931 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.944 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:51:46 compute-0 nova_compute[185191]: 2026-01-27 15:51:46.944 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:51:48 compute-0 podman[256928]: 2026-01-27 15:51:48.314117512 +0000 UTC m=+0.059972591 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:51:48 compute-0 podman[256927]: 2026-01-27 15:51:48.319893457 +0000 UTC m=+0.072299032 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, version=9.4, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 15:51:48 compute-0 nova_compute[185191]: 2026-01-27 15:51:48.853 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:51 compute-0 podman[256971]: 2026-01-27 15:51:51.308517265 +0000 UTC m=+0.068408988 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:51:51 compute-0 nova_compute[185191]: 2026-01-27 15:51:51.415 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:53 compute-0 nova_compute[185191]: 2026-01-27 15:51:53.855 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:56 compute-0 nova_compute[185191]: 2026-01-27 15:51:56.417 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:56 compute-0 nova_compute[185191]: 2026-01-27 15:51:56.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:56 compute-0 nova_compute[185191]: 2026-01-27 15:51:56.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:57 compute-0 nova_compute[185191]: 2026-01-27 15:51:57.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:51:58 compute-0 nova_compute[185191]: 2026-01-27 15:51:58.859 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:51:59 compute-0 podman[201073]: time="2026-01-27T15:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:51:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:51:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 15:52:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:52:00.276 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:52:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:52:00.276 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:52:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:52:00.277 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.418 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:01 compute-0 openstack_network_exporter[204239]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:52:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:52:01 compute-0 openstack_network_exporter[204239]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:52:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.741 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.769 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.770 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Triggering sync for uuid a0b14d34-73c5-426d-8d69-793643148639 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.771 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.771 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.772 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.772 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "a0b14d34-73c5-426d-8d69-793643148639" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.818 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.821 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "a0b14d34-73c5-426d-8d69-793643148639" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:52:01 compute-0 nova_compute[185191]: 2026-01-27 15:52:01.974 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:02 compute-0 nova_compute[185191]: 2026-01-27 15:52:02.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:02 compute-0 nova_compute[185191]: 2026-01-27 15:52:02.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:52:03 compute-0 nova_compute[185191]: 2026-01-27 15:52:03.400 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:52:03 compute-0 nova_compute[185191]: 2026-01-27 15:52:03.400 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:52:03 compute-0 nova_compute[185191]: 2026-01-27 15:52:03.401 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:52:03 compute-0 nova_compute[185191]: 2026-01-27 15:52:03.862 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:05 compute-0 podman[256994]: 2026-01-27 15:52:05.321238613 +0000 UTC m=+0.082725093 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.416 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.420 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.437 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.438 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.438 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.439 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:06 compute-0 nova_compute[185191]: 2026-01-27 15:52:06.439 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:52:07 compute-0 podman[257013]: 2026-01-27 15:52:07.312545801 +0000 UTC m=+0.067497543 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:52:07 compute-0 podman[257015]: 2026-01-27 15:52:07.331323555 +0000 UTC m=+0.075109268 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:52:07 compute-0 podman[257014]: 2026-01-27 15:52:07.379944261 +0000 UTC m=+0.126687203 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 27 15:52:08 compute-0 nova_compute[185191]: 2026-01-27 15:52:08.865 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:11 compute-0 nova_compute[185191]: 2026-01-27 15:52:11.422 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:13 compute-0 nova_compute[185191]: 2026-01-27 15:52:13.869 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:15 compute-0 nova_compute[185191]: 2026-01-27 15:52:15.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:16 compute-0 podman[257081]: 2026-01-27 15:52:16.31577875 +0000 UTC m=+0.065223233 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:52:16 compute-0 nova_compute[185191]: 2026-01-27 15:52:16.424 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:17 compute-0 sshd-session[257082]: Invalid user sol from 2.57.122.238 port 48382
Jan 27 15:52:17 compute-0 sshd-session[257082]: Connection closed by invalid user sol 2.57.122.238 port 48382 [preauth]
Jan 27 15:52:18 compute-0 nova_compute[185191]: 2026-01-27 15:52:18.874 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:19 compute-0 podman[257104]: 2026-01-27 15:52:19.31898871 +0000 UTC m=+0.062782487 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:52:19 compute-0 podman[257103]: 2026-01-27 15:52:19.353001573 +0000 UTC m=+0.102108083 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543)
Jan 27 15:52:21 compute-0 nova_compute[185191]: 2026-01-27 15:52:21.426 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:22 compute-0 podman[257143]: 2026-01-27 15:52:22.312603375 +0000 UTC m=+0.069312192 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:52:23 compute-0 nova_compute[185191]: 2026-01-27 15:52:23.876 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:26 compute-0 nova_compute[185191]: 2026-01-27 15:52:26.427 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:28 compute-0 nova_compute[185191]: 2026-01-27 15:52:28.880 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:29 compute-0 podman[201073]: time="2026-01-27T15:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:52:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:52:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4395 "" "Go-http-client/1.1"
Jan 27 15:52:31 compute-0 openstack_network_exporter[204239]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:52:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:52:31 compute-0 openstack_network_exporter[204239]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:52:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:52:31 compute-0 nova_compute[185191]: 2026-01-27 15:52:31.429 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:33 compute-0 nova_compute[185191]: 2026-01-27 15:52:33.883 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:36 compute-0 podman[257169]: 2026-01-27 15:52:36.310585345 +0000 UTC m=+0.067514334 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 27 15:52:36 compute-0 nova_compute[185191]: 2026-01-27 15:52:36.429 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:38 compute-0 podman[257189]: 2026-01-27 15:52:38.321376809 +0000 UTC m=+0.077622096 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:52:38 compute-0 podman[257191]: 2026-01-27 15:52:38.328380767 +0000 UTC m=+0.076367672 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 15:52:38 compute-0 podman[257190]: 2026-01-27 15:52:38.350439839 +0000 UTC m=+0.102167434 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 15:52:38 compute-0 nova_compute[185191]: 2026-01-27 15:52:38.889 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:41 compute-0 nova_compute[185191]: 2026-01-27 15:52:41.432 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:43 compute-0 sshd-session[257255]: Invalid user solana from 45.148.10.240 port 36006
Jan 27 15:52:43 compute-0 sshd-session[257255]: Connection closed by invalid user solana 45.148.10.240 port 36006 [preauth]
Jan 27 15:52:43 compute-0 nova_compute[185191]: 2026-01-27 15:52:43.898 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:46 compute-0 nova_compute[185191]: 2026-01-27 15:52:46.433 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:47 compute-0 podman[257257]: 2026-01-27 15:52:47.329596419 +0000 UTC m=+0.078362555 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 15:52:47 compute-0 nova_compute[185191]: 2026-01-27 15:52:47.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:47 compute-0 nova_compute[185191]: 2026-01-27 15:52:47.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:52:47 compute-0 nova_compute[185191]: 2026-01-27 15:52:47.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:52:47 compute-0 nova_compute[185191]: 2026-01-27 15:52:47.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:52:47 compute-0 nova_compute[185191]: 2026-01-27 15:52:47.978 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.060 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.125 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.126 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.191 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.199 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.262 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.263 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.326 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.640 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.641 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4946MB free_disk=72.28202819824219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.641 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.642 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.742 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.742 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.743 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.743 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.798 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.814 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.816 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.816 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:52:48 compute-0 nova_compute[185191]: 2026-01-27 15:52:48.902 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:49 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 15:52:49 compute-0 podman[257290]: 2026-01-27 15:52:49.645477594 +0000 UTC m=+0.077600114 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:52:49 compute-0 podman[257289]: 2026-01-27 15:52:49.648822664 +0000 UTC m=+0.078068987 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Jan 27 15:52:51 compute-0 nova_compute[185191]: 2026-01-27 15:52:51.436 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:53 compute-0 podman[257332]: 2026-01-27 15:52:53.30855532 +0000 UTC m=+0.062867829 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:52:53 compute-0 nova_compute[185191]: 2026-01-27 15:52:53.907 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:56 compute-0 nova_compute[185191]: 2026-01-27 15:52:56.438 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:56 compute-0 nova_compute[185191]: 2026-01-27 15:52:56.812 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:56 compute-0 nova_compute[185191]: 2026-01-27 15:52:56.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:58 compute-0 nova_compute[185191]: 2026-01-27 15:52:58.910 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:52:58 compute-0 nova_compute[185191]: 2026-01-27 15:52:58.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:58 compute-0 nova_compute[185191]: 2026-01-27 15:52:58.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:52:59 compute-0 podman[201073]: time="2026-01-27T15:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:52:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:52:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 27 15:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:53:00.277 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:53:00.277 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:53:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:53:00.278 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:53:01 compute-0 openstack_network_exporter[204239]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:53:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:53:01 compute-0 openstack_network_exporter[204239]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:53:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:53:01 compute-0 nova_compute[185191]: 2026-01-27 15:53:01.440 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:02 compute-0 nova_compute[185191]: 2026-01-27 15:53:02.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:02 compute-0 nova_compute[185191]: 2026-01-27 15:53:02.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:53:02 compute-0 nova_compute[185191]: 2026-01-27 15:53:02.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:53:03 compute-0 nova_compute[185191]: 2026-01-27 15:53:03.410 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:53:03 compute-0 nova_compute[185191]: 2026-01-27 15:53:03.411 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:53:03 compute-0 nova_compute[185191]: 2026-01-27 15:53:03.411 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:53:03 compute-0 nova_compute[185191]: 2026-01-27 15:53:03.411 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:53:03 compute-0 nova_compute[185191]: 2026-01-27 15:53:03.915 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.666 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.683 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.683 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.684 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.684 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:04 compute-0 nova_compute[185191]: 2026-01-27 15:53:04.684 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:53:06 compute-0 nova_compute[185191]: 2026-01-27 15:53:06.442 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:07 compute-0 podman[257356]: 2026-01-27 15:53:07.349056159 +0000 UTC m=+0.095908616 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 27 15:53:07 compute-0 nova_compute[185191]: 2026-01-27 15:53:07.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:08 compute-0 nova_compute[185191]: 2026-01-27 15:53:08.917 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:09 compute-0 podman[257376]: 2026-01-27 15:53:09.327296787 +0000 UTC m=+0.072667122 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, config_id=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Jan 27 15:53:09 compute-0 podman[257374]: 2026-01-27 15:53:09.351016824 +0000 UTC m=+0.102676528 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:53:09 compute-0 podman[257375]: 2026-01-27 15:53:09.361574907 +0000 UTC m=+0.106430438 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.996 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.996 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.998 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.010 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a0b14d34-73c5-426d-8d69-793643148639', 'name': 'te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:53:11.011099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.051 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3634122906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.052 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.093 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 22077236750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.094 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.095 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.097 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:53:11.096619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.097 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.097 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.098 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.099 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.100 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:53:11.100794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.117 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.118 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.132 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.132 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:53:11.134553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.138 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.141 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.142 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.143 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:53:11.143415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.144 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:53:11.145069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.146 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:53:11.146299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.147 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:53:11.147865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.169 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 338170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.190 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/cpu volume: 337420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:53:11.191862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.193 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:53:11.193073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.194 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 42.33203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.195 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/memory.usage volume: 46.2578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:53:11.194592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:53:11.196277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.196 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:53:11.197819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.198 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:53:11.198956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:53:11.200139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:53:11.201267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.201 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.202 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:53:11.202780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 30153216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 30468608 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:53:11.204008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.205 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:53:11.205541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.206 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1063775781 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:53:11.206854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.207 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 107780486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.207 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 2534265495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.207 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 265016544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:53:11.208527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.209 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.209 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 1087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.209 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.211 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.211 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:53:11.211058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.212 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.212 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:53:11.213900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.214 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.215 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.215 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:53:11.216210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.216 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.217 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.217 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:53:11.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:53:11 compute-0 nova_compute[185191]: 2026-01-27 15:53:11.445 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:13 compute-0 nova_compute[185191]: 2026-01-27 15:53:13.922 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:16 compute-0 nova_compute[185191]: 2026-01-27 15:53:16.449 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:16 compute-0 nova_compute[185191]: 2026-01-27 15:53:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:18 compute-0 podman[257439]: 2026-01-27 15:53:18.31839412 +0000 UTC m=+0.074454839 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 15:53:18 compute-0 nova_compute[185191]: 2026-01-27 15:53:18.927 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:20 compute-0 podman[257460]: 2026-01-27 15:53:20.301627913 +0000 UTC m=+0.058543193 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:53:20 compute-0 podman[257459]: 2026-01-27 15:53:20.339321165 +0000 UTC m=+0.100310514 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.buildah.version=1.29.0)
Jan 27 15:53:21 compute-0 nova_compute[185191]: 2026-01-27 15:53:21.450 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:23 compute-0 nova_compute[185191]: 2026-01-27 15:53:23.929 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:24 compute-0 podman[257501]: 2026-01-27 15:53:24.329017874 +0000 UTC m=+0.080512503 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:53:26 compute-0 nova_compute[185191]: 2026-01-27 15:53:26.454 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:28 compute-0 nova_compute[185191]: 2026-01-27 15:53:28.932 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:29 compute-0 podman[201073]: time="2026-01-27T15:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:53:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:53:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 27 15:53:31 compute-0 openstack_network_exporter[204239]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:53:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:53:31 compute-0 openstack_network_exporter[204239]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:53:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:53:31 compute-0 nova_compute[185191]: 2026-01-27 15:53:31.455 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:33 compute-0 nova_compute[185191]: 2026-01-27 15:53:33.936 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:36 compute-0 nova_compute[185191]: 2026-01-27 15:53:36.457 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:38 compute-0 podman[257523]: 2026-01-27 15:53:38.389562659 +0000 UTC m=+0.118388940 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:53:38 compute-0 nova_compute[185191]: 2026-01-27 15:53:38.940 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:40 compute-0 podman[257541]: 2026-01-27 15:53:40.311491606 +0000 UTC m=+0.062364775 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 15:53:40 compute-0 podman[257543]: 2026-01-27 15:53:40.322848921 +0000 UTC m=+0.065652534 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:53:40 compute-0 podman[257542]: 2026-01-27 15:53:40.353071283 +0000 UTC m=+0.100604373 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 15:53:41 compute-0 nova_compute[185191]: 2026-01-27 15:53:41.459 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:43 compute-0 nova_compute[185191]: 2026-01-27 15:53:43.944 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:46 compute-0 nova_compute[185191]: 2026-01-27 15:53:46.461 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:48 compute-0 nova_compute[185191]: 2026-01-27 15:53:48.950 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:49 compute-0 podman[257601]: 2026-01-27 15:53:49.339121387 +0000 UTC m=+0.086521864 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:53:49 compute-0 nova_compute[185191]: 2026-01-27 15:53:49.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:49 compute-0 nova_compute[185191]: 2026-01-27 15:53:49.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:53:49 compute-0 nova_compute[185191]: 2026-01-27 15:53:49.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:53:49 compute-0 nova_compute[185191]: 2026-01-27 15:53:49.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:53:49 compute-0 nova_compute[185191]: 2026-01-27 15:53:49.978 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.059 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.127 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.128 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.188 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.194 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.258 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.259 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.328 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.658 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.660 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=72.28202819824219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.660 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.661 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.749 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.750 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.750 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.751 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.801 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.823 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.825 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:53:50 compute-0 nova_compute[185191]: 2026-01-27 15:53:50.826 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:53:51 compute-0 podman[257634]: 2026-01-27 15:53:51.309969197 +0000 UTC m=+0.069216180 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=base rhel9, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0)
Jan 27 15:53:51 compute-0 podman[257635]: 2026-01-27 15:53:51.331565737 +0000 UTC m=+0.087929793 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 15:53:51 compute-0 nova_compute[185191]: 2026-01-27 15:53:51.463 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:53 compute-0 nova_compute[185191]: 2026-01-27 15:53:53.954 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:55 compute-0 podman[257679]: 2026-01-27 15:53:55.329947052 +0000 UTC m=+0.071752268 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:53:56 compute-0 nova_compute[185191]: 2026-01-27 15:53:56.465 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:58 compute-0 nova_compute[185191]: 2026-01-27 15:53:58.958 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:53:59 compute-0 podman[201073]: time="2026-01-27T15:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:53:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:53:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4396 "" "Go-http-client/1.1"
Jan 27 15:53:59 compute-0 nova_compute[185191]: 2026-01-27 15:53:59.827 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:53:59 compute-0 nova_compute[185191]: 2026-01-27 15:53:59.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:54:00.278 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:54:00.279 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:54:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:54:00.280 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:54:00 compute-0 nova_compute[185191]: 2026-01-27 15:54:00.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:01 compute-0 openstack_network_exporter[204239]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:54:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:54:01 compute-0 openstack_network_exporter[204239]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:54:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:54:01 compute-0 nova_compute[185191]: 2026-01-27 15:54:01.467 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:02 compute-0 nova_compute[185191]: 2026-01-27 15:54:02.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:02 compute-0 nova_compute[185191]: 2026-01-27 15:54:02.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:54:03 compute-0 nova_compute[185191]: 2026-01-27 15:54:03.543 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:54:03 compute-0 nova_compute[185191]: 2026-01-27 15:54:03.543 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:54:03 compute-0 nova_compute[185191]: 2026-01-27 15:54:03.544 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:54:03 compute-0 nova_compute[185191]: 2026-01-27 15:54:03.961 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.027 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.050 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.051 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.052 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.053 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:05 compute-0 nova_compute[185191]: 2026-01-27 15:54:05.053 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:54:06 compute-0 nova_compute[185191]: 2026-01-27 15:54:06.469 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:08 compute-0 nova_compute[185191]: 2026-01-27 15:54:08.964 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:09 compute-0 podman[257705]: 2026-01-27 15:54:09.308039392 +0000 UTC m=+0.065542258 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 15:54:09 compute-0 nova_compute[185191]: 2026-01-27 15:54:09.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:11 compute-0 podman[257723]: 2026-01-27 15:54:11.320295264 +0000 UTC m=+0.075396443 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.schema-version=1.0)
Jan 27 15:54:11 compute-0 podman[257725]: 2026-01-27 15:54:11.343842929 +0000 UTC m=+0.091796285 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 27 15:54:11 compute-0 podman[257724]: 2026-01-27 15:54:11.34797201 +0000 UTC m=+0.099683117 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:54:11 compute-0 nova_compute[185191]: 2026-01-27 15:54:11.471 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:13 compute-0 nova_compute[185191]: 2026-01-27 15:54:13.969 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:16 compute-0 nova_compute[185191]: 2026-01-27 15:54:16.473 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:16 compute-0 nova_compute[185191]: 2026-01-27 15:54:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:18 compute-0 nova_compute[185191]: 2026-01-27 15:54:18.973 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:20 compute-0 podman[257785]: 2026-01-27 15:54:20.34272524 +0000 UTC m=+0.099160884 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:54:21 compute-0 nova_compute[185191]: 2026-01-27 15:54:21.475 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:22 compute-0 podman[257804]: 2026-01-27 15:54:22.306740942 +0000 UTC m=+0.064197571 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, name=ubi9, config_id=kepler, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible)
Jan 27 15:54:22 compute-0 podman[257805]: 2026-01-27 15:54:22.328509809 +0000 UTC m=+0.081476657 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:54:23 compute-0 nova_compute[185191]: 2026-01-27 15:54:23.975 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:26 compute-0 podman[257847]: 2026-01-27 15:54:26.338239535 +0000 UTC m=+0.092430772 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:54:26 compute-0 nova_compute[185191]: 2026-01-27 15:54:26.477 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:28 compute-0 nova_compute[185191]: 2026-01-27 15:54:28.979 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:29 compute-0 podman[201073]: time="2026-01-27T15:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:54:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:54:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 27 15:54:31 compute-0 openstack_network_exporter[204239]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:54:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:54:31 compute-0 openstack_network_exporter[204239]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:54:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:54:31 compute-0 nova_compute[185191]: 2026-01-27 15:54:31.480 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:33 compute-0 nova_compute[185191]: 2026-01-27 15:54:33.983 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:36 compute-0 nova_compute[185191]: 2026-01-27 15:54:36.482 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:38 compute-0 nova_compute[185191]: 2026-01-27 15:54:38.985 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:40 compute-0 podman[257871]: 2026-01-27 15:54:40.313935725 +0000 UTC m=+0.066182915 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 15:54:41 compute-0 nova_compute[185191]: 2026-01-27 15:54:41.484 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:42 compute-0 podman[257889]: 2026-01-27 15:54:42.318687455 +0000 UTC m=+0.072277079 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 15:54:42 compute-0 podman[257891]: 2026-01-27 15:54:42.354463799 +0000 UTC m=+0.100392007 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, 
config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=)
Jan 27 15:54:42 compute-0 podman[257890]: 2026-01-27 15:54:42.372053033 +0000 UTC m=+0.110765426 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 27 15:54:43 compute-0 nova_compute[185191]: 2026-01-27 15:54:43.987 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:46 compute-0 nova_compute[185191]: 2026-01-27 15:54:46.486 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:48 compute-0 nova_compute[185191]: 2026-01-27 15:54:48.989 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:50 compute-0 sshd-session[257953]: Invalid user sol from 2.57.122.238 port 48356
Jan 27 15:54:50 compute-0 podman[257955]: 2026-01-27 15:54:50.647378193 +0000 UTC m=+0.107438607 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:54:50 compute-0 sshd-session[257953]: Connection closed by invalid user sol 2.57.122.238 port 48356 [preauth]
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.488 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.975 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.977 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:54:51 compute-0 nova_compute[185191]: 2026-01-27 15:54:51.977 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.067 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.131 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.132 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.195 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.203 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.267 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.268 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.331 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.645 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.647 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=72.28194046020508GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.647 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.648 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.727 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.727 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.727 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.728 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.779 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.794 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.795 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:54:52 compute-0 nova_compute[185191]: 2026-01-27 15:54:52.796 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:54:53 compute-0 podman[257987]: 2026-01-27 15:54:53.336101177 +0000 UTC m=+0.088196278 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4)
Jan 27 15:54:53 compute-0 podman[257988]: 2026-01-27 15:54:53.354968065 +0000 UTC m=+0.102373740 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:54:53 compute-0 nova_compute[185191]: 2026-01-27 15:54:53.992 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:56 compute-0 nova_compute[185191]: 2026-01-27 15:54:56.489 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:57 compute-0 podman[258029]: 2026-01-27 15:54:57.308024205 +0000 UTC m=+0.066944185 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:54:58 compute-0 sshd-session[258053]: Invalid user solana from 45.148.10.240 port 49066
Jan 27 15:54:58 compute-0 sshd-session[258053]: Connection closed by invalid user solana 45.148.10.240 port 49066 [preauth]
Jan 27 15:54:58 compute-0 nova_compute[185191]: 2026-01-27 15:54:58.994 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:54:59 compute-0 podman[201073]: time="2026-01-27T15:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:54:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:54:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 27 15:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:55:00.279 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:55:00.280 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:55:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:55:00.281 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:55:01 compute-0 openstack_network_exporter[204239]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:55:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:55:01 compute-0 openstack_network_exporter[204239]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:55:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:55:01 compute-0 nova_compute[185191]: 2026-01-27 15:55:01.491 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:01 compute-0 nova_compute[185191]: 2026-01-27 15:55:01.790 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:01 compute-0 nova_compute[185191]: 2026-01-27 15:55:01.791 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:01 compute-0 nova_compute[185191]: 2026-01-27 15:55:01.812 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:01 compute-0 nova_compute[185191]: 2026-01-27 15:55:01.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:03 compute-0 nova_compute[185191]: 2026-01-27 15:55:03.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:03 compute-0 nova_compute[185191]: 2026-01-27 15:55:03.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:55:03 compute-0 nova_compute[185191]: 2026-01-27 15:55:03.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:55:03 compute-0 nova_compute[185191]: 2026-01-27 15:55:03.997 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:04 compute-0 nova_compute[185191]: 2026-01-27 15:55:04.657 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:55:04 compute-0 nova_compute[185191]: 2026-01-27 15:55:04.658 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:55:04 compute-0 nova_compute[185191]: 2026-01-27 15:55:04.658 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:55:04 compute-0 nova_compute[185191]: 2026-01-27 15:55:04.658 185195 DEBUG nova.objects.instance [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.041 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [{"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.364 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-f8fa4ecf-1446-421b-893d-f2b34f89da54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.365 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.365 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.366 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.366 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:55:06 compute-0 nova_compute[185191]: 2026-01-27 15:55:06.493 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:09 compute-0 nova_compute[185191]: 2026-01-27 15:55:09.000 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:10 compute-0 nova_compute[185191]: 2026-01-27 15:55:10.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.997 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.997 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.997 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'name': 'te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.007 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a0b14d34-73c5-426d-8d69-793643148639', 'name': 'te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt', 'flavor': {'id': 'aed09843-3292-40b2-b829-c4ed118e135f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '9d30f498-7a22-4c96-a758-84b2da277162'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '20f0077bc9bd475ebff1667438d2013e', 'user_id': '2f735dc3417d4dc1830a1081fe9a604b', 'hostId': 'a1663c21f4c2586f65c0b6541b29f837c5b2bf25e66085f6a331b3fc', 'status': 'active', 'metadata': {'metering.server_group': 'b3308bb6-f54d-4153-86c0-fa8fa74a39af'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T15:55:11.008207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.047 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 3634122906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.048 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.095 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 22077236750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.096 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.097 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.097 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T15:55:11.097228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.098 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.098 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.098 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T15:55:11.099717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.114 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.114 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.131 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.131 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T15:55:11.132295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.136 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.139 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.140 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T15:55:11.140161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T15:55:11.141470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.142 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T15:55:11.142543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.143 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T15:55:11.143727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.162 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/cpu volume: 339430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.179 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/cpu volume: 338620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da622270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da622270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.181 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.181 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T15:55:11.180908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T15:55:11.182250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.182 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/memory.usage volume: 42.33203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T15:55:11.183539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/memory.usage volume: 46.2578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T15:55:11.184792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T15:55:11.186225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.186 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.187 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T15:55:11.187474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T15:55:11.188983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.189 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.189 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T15:55:11.190293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.190 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T15:55:11.192087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 30153216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.193 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.194 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 30468608 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.194 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T15:55:11.193396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T15:55:11.195262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.195 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T15:55:11.196562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.196 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 1063775781 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.197 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.latency volume: 107780486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.197 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 2534265495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.197 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.latency volume: 265016544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 1083 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.198 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.199 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 1087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.199 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T15:55:11.198327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T15:55:11.200502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.200 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.201 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.201 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.202 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T15:55:11.202328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.203 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T15:55:11.203775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.204 14 DEBUG ceilometer.compute.pollsters [-] f8fa4ecf-1446-421b-893d-f2b34f89da54/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.204 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.204 14 DEBUG ceilometer.compute.pollsters [-] a0b14d34-73c5-426d-8d69-793643148639/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:55:11.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:55:11 compute-0 podman[258056]: 2026-01-27 15:55:11.330561169 +0000 UTC m=+0.086149313 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:55:11 compute-0 nova_compute[185191]: 2026-01-27 15:55:11.495 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:13 compute-0 podman[258075]: 2026-01-27 15:55:13.313789859 +0000 UTC m=+0.071306843 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 15:55:13 compute-0 podman[258077]: 2026-01-27 15:55:13.329933264 +0000 UTC m=+0.075001783 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 15:55:13 compute-0 podman[258076]: 2026-01-27 15:55:13.373326873 +0000 UTC m=+0.122112832 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:55:14 compute-0 nova_compute[185191]: 2026-01-27 15:55:14.003 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:16 compute-0 nova_compute[185191]: 2026-01-27 15:55:16.497 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:18 compute-0 nova_compute[185191]: 2026-01-27 15:55:18.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:19 compute-0 nova_compute[185191]: 2026-01-27 15:55:19.006 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:21 compute-0 podman[258138]: 2026-01-27 15:55:21.329752567 +0000 UTC m=+0.084135468 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 27 15:55:21 compute-0 nova_compute[185191]: 2026-01-27 15:55:21.499 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:24 compute-0 nova_compute[185191]: 2026-01-27 15:55:24.007 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:24 compute-0 podman[258157]: 2026-01-27 15:55:24.310267816 +0000 UTC m=+0.062514176 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:55:24 compute-0 podman[258156]: 2026-01-27 15:55:24.314124619 +0000 UTC m=+0.070891721 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, managed_by=edpm_ansible)
Jan 27 15:55:26 compute-0 nova_compute[185191]: 2026-01-27 15:55:26.502 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:28 compute-0 podman[258198]: 2026-01-27 15:55:28.342520712 +0000 UTC m=+0.097005995 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 15:55:29 compute-0 nova_compute[185191]: 2026-01-27 15:55:29.010 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:29 compute-0 podman[201073]: time="2026-01-27T15:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:55:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:55:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 27 15:55:31 compute-0 openstack_network_exporter[204239]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:55:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:55:31 compute-0 openstack_network_exporter[204239]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:55:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:55:31 compute-0 nova_compute[185191]: 2026-01-27 15:55:31.504 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:34 compute-0 nova_compute[185191]: 2026-01-27 15:55:34.013 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:36 compute-0 nova_compute[185191]: 2026-01-27 15:55:36.506 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:39 compute-0 nova_compute[185191]: 2026-01-27 15:55:39.016 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:41 compute-0 nova_compute[185191]: 2026-01-27 15:55:41.507 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:42 compute-0 podman[258221]: 2026-01-27 15:55:42.317518118 +0000 UTC m=+0.070188623 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:55:44 compute-0 nova_compute[185191]: 2026-01-27 15:55:44.018 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:44 compute-0 podman[258240]: 2026-01-27 15:55:44.316844832 +0000 UTC m=+0.069503344 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 15:55:44 compute-0 podman[258242]: 2026-01-27 15:55:44.316920485 +0000 UTC m=+0.063982776 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:55:44 compute-0 podman[258241]: 2026-01-27 15:55:44.371568087 +0000 UTC m=+0.121424993 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 15:55:46 compute-0 nova_compute[185191]: 2026-01-27 15:55:46.509 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:49 compute-0 nova_compute[185191]: 2026-01-27 15:55:49.022 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:51 compute-0 nova_compute[185191]: 2026-01-27 15:55:51.513 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:52 compute-0 podman[258305]: 2026-01-27 15:55:52.305998568 +0000 UTC m=+0.068624071 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:55:53 compute-0 nova_compute[185191]: 2026-01-27 15:55:53.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:55:53 compute-0 nova_compute[185191]: 2026-01-27 15:55:53.972 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:55:53 compute-0 nova_compute[185191]: 2026-01-27 15:55:53.972 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:55:53 compute-0 nova_compute[185191]: 2026-01-27 15:55:53.973 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:55:53 compute-0 nova_compute[185191]: 2026-01-27 15:55:53.973 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.024 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.062 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.124 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.125 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.212 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.220 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.277 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.278 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.366 185195 DEBUG oslo_concurrency.processutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.735 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.736 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4874MB free_disk=72.28197479248047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.737 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.737 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.949 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance f8fa4ecf-1446-421b-893d-f2b34f89da54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.949 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Instance a0b14d34-73c5-426d-8d69-793643148639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.950 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:55:54 compute-0 nova_compute[185191]: 2026-01-27 15:55:54.950 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:55:55 compute-0 nova_compute[185191]: 2026-01-27 15:55:55.140 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:55:55 compute-0 nova_compute[185191]: 2026-01-27 15:55:55.184 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:55:55 compute-0 nova_compute[185191]: 2026-01-27 15:55:55.186 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:55:55 compute-0 nova_compute[185191]: 2026-01-27 15:55:55.186 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:55:55 compute-0 podman[258337]: 2026-01-27 15:55:55.319932566 +0000 UTC m=+0.066127513 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:55:55 compute-0 podman[258336]: 2026-01-27 15:55:55.344385795 +0000 UTC m=+0.098978598 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9)
Jan 27 15:55:56 compute-0 nova_compute[185191]: 2026-01-27 15:55:56.515 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:59 compute-0 nova_compute[185191]: 2026-01-27 15:55:59.028 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:55:59 compute-0 podman[258379]: 2026-01-27 15:55:59.322456159 +0000 UTC m=+0.066683618 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 15:55:59 compute-0 podman[201073]: time="2026-01-27T15:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:55:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:55:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 27 15:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:00.280 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:00.282 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:00.283 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:01 compute-0 openstack_network_exporter[204239]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:56:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:56:01 compute-0 openstack_network_exporter[204239]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:56:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:56:01 compute-0 nova_compute[185191]: 2026-01-27 15:56:01.517 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.181 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.182 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.182 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 15:56:02 compute-0 nova_compute[185191]: 2026-01-27 15:56:02.960 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:04 compute-0 nova_compute[185191]: 2026-01-27 15:56:04.030 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:04 compute-0 nova_compute[185191]: 2026-01-27 15:56:04.983 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:04 compute-0 nova_compute[185191]: 2026-01-27 15:56:04.984 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:56:05 compute-0 nova_compute[185191]: 2026-01-27 15:56:05.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:05 compute-0 nova_compute[185191]: 2026-01-27 15:56:05.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:56:06 compute-0 nova_compute[185191]: 2026-01-27 15:56:06.259 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 15:56:06 compute-0 nova_compute[185191]: 2026-01-27 15:56:06.260 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquired lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 15:56:06 compute-0 nova_compute[185191]: 2026-01-27 15:56:06.260 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 15:56:06 compute-0 nova_compute[185191]: 2026-01-27 15:56:06.519 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:09 compute-0 nova_compute[185191]: 2026-01-27 15:56:09.034 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:09 compute-0 nova_compute[185191]: 2026-01-27 15:56:09.723 185195 DEBUG nova.network.neutron [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [{"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:56:09 compute-0 nova_compute[185191]: 2026-01-27 15:56:09.746 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Releasing lock "refresh_cache-a0b14d34-73c5-426d-8d69-793643148639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 15:56:09 compute-0 nova_compute[185191]: 2026-01-27 15:56:09.746 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 15:56:09 compute-0 nova_compute[185191]: 2026-01-27 15:56:09.747 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:10 compute-0 nova_compute[185191]: 2026-01-27 15:56:10.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:11 compute-0 nova_compute[185191]: 2026-01-27 15:56:11.521 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:13 compute-0 podman[258401]: 2026-01-27 15:56:13.29615878 +0000 UTC m=+0.050262346 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 15:56:14 compute-0 nova_compute[185191]: 2026-01-27 15:56:14.038 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:14 compute-0 podman[258423]: 2026-01-27 15:56:14.758886921 +0000 UTC m=+0.070411749 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 15:56:14 compute-0 podman[258420]: 2026-01-27 15:56:14.761968274 +0000 UTC m=+0.087898560 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:56:14 compute-0 podman[258421]: 2026-01-27 15:56:14.799117715 +0000 UTC m=+0.117047575 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 15:56:16 compute-0 nova_compute[185191]: 2026-01-27 15:56:16.523 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:19 compute-0 nova_compute[185191]: 2026-01-27 15:56:19.039 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:20 compute-0 nova_compute[185191]: 2026-01-27 15:56:20.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:20 compute-0 nova_compute[185191]: 2026-01-27 15:56:20.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:20 compute-0 nova_compute[185191]: 2026-01-27 15:56:20.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 15:56:20 compute-0 nova_compute[185191]: 2026-01-27 15:56:20.959 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 15:56:21 compute-0 nova_compute[185191]: 2026-01-27 15:56:21.527 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:23 compute-0 podman[258484]: 2026-01-27 15:56:23.338966695 +0000 UTC m=+0.095969417 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:56:24 compute-0 nova_compute[185191]: 2026-01-27 15:56:24.040 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.658 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.659 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.659 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.659 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.660 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.661 185195 INFO nova.compute.manager [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Terminating instance
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.662 185195 DEBUG nova.compute.manager [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:56:25 compute-0 kernel: tap9a8c7659-ad (unregistering): left promiscuous mode
Jan 27 15:56:25 compute-0 NetworkManager[56090]: <info>  [1769529385.6998] device (tap9a8c7659-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:56:25 compute-0 ovn_controller[97541]: 2026-01-27T15:56:25Z|00176|binding|INFO|Releasing lport 9a8c7659-ad95-4751-9633-f076227a89a5 from this chassis (sb_readonly=0)
Jan 27 15:56:25 compute-0 ovn_controller[97541]: 2026-01-27T15:56:25Z|00177|binding|INFO|Setting lport 9a8c7659-ad95-4751-9633-f076227a89a5 down in Southbound
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.717 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 ovn_controller[97541]: 2026-01-27T15:56:25Z|00178|binding|INFO|Removing iface tap9a8c7659-ad ovn-installed in OVS
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.721 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.737 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:9a:3f 10.100.1.182'], port_security=['fa:16:3e:9b:9a:3f 10.100.1.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.182/16', 'neutron:device_id': 'f8fa4ecf-1446-421b-893d-f2b34f89da54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-583566c3-a7da-49ba-8c93-87be3496cb80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20f0077bc9bd475ebff1667438d2013e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c775d39-0088-4183-837a-f310fb1cc533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e677173-f8a0-4b87-8946-43d053c4a459, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=9a8c7659-ad95-4751-9633-f076227a89a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.739 106793 INFO neutron.agent.ovn.metadata.agent [-] Port 9a8c7659-ad95-4751-9633-f076227a89a5 in datapath 583566c3-a7da-49ba-8c93-87be3496cb80 unbound from our chassis
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.740 106793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 583566c3-a7da-49ba-8c93-87be3496cb80
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.754 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.767 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbe33da-aee7-41d4-a597-f15d7870ff43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 27 15:56:25 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 7.457s CPU time.
Jan 27 15:56:25 compute-0 systemd-machined[156506]: Machine qemu-15-instance-0000000e terminated.
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.805 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[bf11977f-5e39-468c-b767-8c6fc2e9e078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.809 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[407f5400-d063-4173-9904-18944baa7d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 podman[258503]: 2026-01-27 15:56:25.838773248 +0000 UTC m=+0.112008670 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your 
containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, managed_by=edpm_ansible, name=ubi9)
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.840 238652 DEBUG oslo.privsep.daemon [-] privsep: reply[b255fca3-ff4d-4147-8de9-c0600970a808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.862 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ac78b-ddc1-4428-8777-ac1c8908c7a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap583566c3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:b6:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607940, 'reachable_time': 36558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258556, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 podman[258506]: 2026-01-27 15:56:25.876940966 +0000 UTC m=+0.132607235 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.881 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5a950c-def2-4ca4-914c-63c2fa8f4653]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap583566c3-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607951, 'tstamp': 607951}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258558, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap583566c3-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607954, 'tstamp': 607954}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258558, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.884 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap583566c3-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.886 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.891 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.898 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.899 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap583566c3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.899 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.899 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap583566c3-a0, col_values=(('external_ids', {'iface-id': '1a1e49d2-439b-4887-8a67-bfa43f528ce6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:25 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:25.900 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.931 185195 INFO nova.virt.libvirt.driver [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Instance destroyed successfully.
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.931 185195 DEBUG nova.objects.instance [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'resources' on Instance uuid f8fa4ecf-1446-421b-893d-f2b34f89da54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.952 185195 DEBUG nova.virt.libvirt.vif [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:41:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-wtompy66nizt-npp46zsgtdf4',id=14,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:41:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-ez8uojz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiTest-349502190-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:41:44Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=f8fa4ecf-1446-421b-893d-f2b34f89da54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.952 185195 DEBUG nova.network.os_vif_util [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "9a8c7659-ad95-4751-9633-f076227a89a5", "address": "fa:16:3e:9b:9a:3f", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a8c7659-ad", "ovs_interfaceid": "9a8c7659-ad95-4751-9633-f076227a89a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.953 185195 DEBUG nova.network.os_vif_util [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.953 185195 DEBUG os_vif [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.955 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.955 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a8c7659-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.956 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.959 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.959 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.961 185195 INFO os_vif [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:9a:3f,bridge_name='br-int',has_traffic_filtering=True,id=9a8c7659-ad95-4751-9633-f076227a89a5,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a8c7659-ad')
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.962 185195 INFO nova.virt.libvirt.driver [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Deleting instance files /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54_del
Jan 27 15:56:25 compute-0 nova_compute[185191]: 2026-01-27 15:56:25.963 185195 INFO nova.virt.libvirt.driver [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Deletion of /var/lib/nova/instances/f8fa4ecf-1446-421b-893d-f2b34f89da54_del complete
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.017 185195 INFO nova.compute.manager [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Took 0.35 seconds to destroy the instance on the hypervisor.
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.018 185195 DEBUG oslo.service.loopingcall [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.018 185195 DEBUG nova.compute.manager [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.019 185195 DEBUG nova.network.neutron [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.051 185195 DEBUG nova.compute.manager [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-unplugged-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.052 185195 DEBUG oslo_concurrency.lockutils [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.052 185195 DEBUG oslo_concurrency.lockutils [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.052 185195 DEBUG oslo_concurrency.lockutils [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.052 185195 DEBUG nova.compute.manager [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] No waiting events found dispatching network-vif-unplugged-9a8c7659-ad95-4751-9633-f076227a89a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.052 185195 DEBUG nova.compute.manager [req-3feef08b-1d31-4e89-a145-240d70145f36 req-deffab1f-3312-46b8-a5b4-a64351f76541 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-unplugged-9a8c7659-ad95-4751-9633-f076227a89a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:56:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:26.300 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:56:26 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:26.301 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.303 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.529 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.766 185195 DEBUG nova.network.neutron [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.785 185195 INFO nova.compute.manager [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Took 0.77 seconds to deallocate network for instance.
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.828 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.829 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.904 185195 DEBUG nova.compute.provider_tree [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.920 185195 DEBUG nova.scheduler.client.report [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:56:26 compute-0 nova_compute[185191]: 2026-01-27 15:56:26.948 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:27 compute-0 nova_compute[185191]: 2026-01-27 15:56:27.011 185195 INFO nova.scheduler.client.report [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Deleted allocations for instance f8fa4ecf-1446-421b-893d-f2b34f89da54
Jan 27 15:56:27 compute-0 nova_compute[185191]: 2026-01-27 15:56:27.101 185195 DEBUG oslo_concurrency.lockutils [None req-5ac4f6c4-3f22-43c9-82ea-23260bd4c7d0 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.147 185195 DEBUG nova.compute.manager [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.147 185195 DEBUG oslo_concurrency.lockutils [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.148 185195 DEBUG oslo_concurrency.lockutils [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.148 185195 DEBUG oslo_concurrency.lockutils [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "f8fa4ecf-1446-421b-893d-f2b34f89da54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.148 185195 DEBUG nova.compute.manager [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] No waiting events found dispatching network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.148 185195 WARNING nova.compute.manager [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received unexpected event network-vif-plugged-9a8c7659-ad95-4751-9633-f076227a89a5 for instance with vm_state deleted and task_state None.
Jan 27 15:56:28 compute-0 nova_compute[185191]: 2026-01-27 15:56:28.148 185195 DEBUG nova.compute.manager [req-a55d505b-c932-456b-bf84-358874631fee req-6fc27121-9427-4002-a94c-a860114f77c2 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Received event network-vif-deleted-9a8c7659-ad95-4751-9633-f076227a89a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:29 compute-0 podman[201073]: time="2026-01-27T15:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:56:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 15:56:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4396 "" "Go-http-client/1.1"
Jan 27 15:56:30 compute-0 podman[258576]: 2026-01-27 15:56:30.356457604 +0000 UTC m=+0.103829019 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.692 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.693 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.693 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.693 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.694 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.695 185195 INFO nova.compute.manager [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Terminating instance
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.697 185195 DEBUG nova.compute.manager [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 15:56:30 compute-0 kernel: tapd11ff881-65 (unregistering): left promiscuous mode
Jan 27 15:56:30 compute-0 NetworkManager[56090]: <info>  [1769529390.7335] device (tapd11ff881-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 15:56:30 compute-0 ovn_controller[97541]: 2026-01-27T15:56:30Z|00179|binding|INFO|Releasing lport d11ff881-6533-4499-87d1-ff504269c883 from this chassis (sb_readonly=0)
Jan 27 15:56:30 compute-0 ovn_controller[97541]: 2026-01-27T15:56:30Z|00180|binding|INFO|Setting lport d11ff881-6533-4499-87d1-ff504269c883 down in Southbound
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.738 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 ovn_controller[97541]: 2026-01-27T15:56:30Z|00181|binding|INFO|Removing iface tapd11ff881-65 ovn-installed in OVS
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.744 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:30.759 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:8f:27 10.100.1.167'], port_security=['fa:16:3e:58:8f:27 10.100.1.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.167/16', 'neutron:device_id': 'a0b14d34-73c5-426d-8d69-793643148639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-583566c3-a7da-49ba-8c93-87be3496cb80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20f0077bc9bd475ebff1667438d2013e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c775d39-0088-4183-837a-f310fb1cc533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e677173-f8a0-4b87-8946-43d053c4a459, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>], logical_port=d11ff881-6533-4499-87d1-ff504269c883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdbd70a8d90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 15:56:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:30.761 106793 INFO neutron.agent.ovn.metadata.agent [-] Port d11ff881-6533-4499-87d1-ff504269c883 in datapath 583566c3-a7da-49ba-8c93-87be3496cb80 unbound from our chassis
Jan 27 15:56:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:30.763 106793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 583566c3-a7da-49ba-8c93-87be3496cb80, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.763 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:30.764 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cb7e9e-2ffa-4190-b229-212d953cde0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:30 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:30.765 106793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80 namespace which is not needed anymore
Jan 27 15:56:30 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 27 15:56:30 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 37.227s CPU time.
Jan 27 15:56:30 compute-0 systemd-machined[156506]: Machine qemu-16-instance-0000000f terminated.
Jan 27 15:56:30 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [NOTICE]   (253373) : haproxy version is 2.8.14-c23fe91
Jan 27 15:56:30 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [NOTICE]   (253373) : path to executable is /usr/sbin/haproxy
Jan 27 15:56:30 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [WARNING]  (253373) : Exiting Master process...
Jan 27 15:56:30 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [ALERT]    (253373) : Current worker (253375) exited with code 143 (Terminated)
Jan 27 15:56:30 compute-0 neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80[253369]: [WARNING]  (253373) : All workers exited. Exiting... (0)
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.922 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 systemd[1]: libpod-bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964.scope: Deactivated successfully.
Jan 27 15:56:30 compute-0 podman[258625]: 2026-01-27 15:56:30.929040376 +0000 UTC m=+0.057570733 container died bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.929 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.956 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964-userdata-shm.mount: Deactivated successfully.
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.971 185195 INFO nova.virt.libvirt.driver [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] Instance destroyed successfully.
Jan 27 15:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-311ae5e2c4d666edcd5b8091e064f610f650003f0cb61f01650c01a5d1365fe7-merged.mount: Deactivated successfully.
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.972 185195 DEBUG nova.objects.instance [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lazy-loading 'resources' on Instance uuid a0b14d34-73c5-426d-8d69-793643148639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 15:56:30 compute-0 podman[258625]: 2026-01-27 15:56:30.9863339 +0000 UTC m=+0.114864257 container cleanup bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.988 185195 DEBUG nova.virt.libvirt.vif [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T15:46:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7693531-asg-o24eqpo3kaph-utdti367a3ld-lmukwdocg7xt',id=15,image_ref='9d30f498-7a22-4c96-a758-84b2da277162',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T15:46:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='b3308bb6-f54d-4153-86c0-fa8fa74a39af'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20f0077bc9bd475ebff1667438d2013e',ramdisk_id='',reservation_id='r-xovtw6fb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9d30f498-7a22-4c96-a758-84b2da277162',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-349502190',owner_user_name='tempest-PrometheusGabbiTest-349502190-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T15:46:35Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='2f735dc3417d4dc1830a1081fe9a604b',uuid=a0b14d34-73c5-426d-8d69-793643148639,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.989 185195 DEBUG nova.network.os_vif_util [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converting VIF {"id": "d11ff881-6533-4499-87d1-ff504269c883", "address": "fa:16:3e:58:8f:27", "network": {"id": "583566c3-a7da-49ba-8c93-87be3496cb80", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20f0077bc9bd475ebff1667438d2013e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd11ff881-65", "ovs_interfaceid": "d11ff881-6533-4499-87d1-ff504269c883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.990 185195 DEBUG nova.network.os_vif_util [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.991 185195 DEBUG os_vif [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.993 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.993 185195 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd11ff881-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:30 compute-0 systemd[1]: libpod-conmon-bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964.scope: Deactivated successfully.
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.995 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:30 compute-0 nova_compute[185191]: 2026-01-27 15:56:30.999 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.002 185195 INFO os_vif [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:8f:27,bridge_name='br-int',has_traffic_filtering=True,id=d11ff881-6533-4499-87d1-ff504269c883,network=Network(583566c3-a7da-49ba-8c93-87be3496cb80),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd11ff881-65')
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.004 185195 INFO nova.virt.libvirt.driver [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Deleting instance files /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639_del
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.005 185195 INFO nova.virt.libvirt.driver [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Deletion of /var/lib/nova/instances/a0b14d34-73c5-426d-8d69-793643148639_del complete
Jan 27 15:56:31 compute-0 podman[258668]: 2026-01-27 15:56:31.058276809 +0000 UTC m=+0.042670651 container remove bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.061 185195 INFO nova.compute.manager [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Took 0.36 seconds to destroy the instance on the hypervisor.
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.062 185195 DEBUG oslo.service.loopingcall [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.062 185195 DEBUG nova.compute.manager [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.062 185195 DEBUG nova.network.neutron [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.064 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[5314a5d8-2e07-4c4b-88d9-4b4a14e61d86]: (4, ('Tue Jan 27 03:56:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80 (bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964)\nbb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964\nTue Jan 27 03:56:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80 (bb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964)\nbb7cb3a6e1b1b657c458eab2de440ede5023466c703aa0c79bca7234b6da4964\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.066 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[4004426d-a5ff-4c76-896f-aed65ef70b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.067 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap583566c3-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.069 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:31 compute-0 kernel: tap583566c3-a0: left promiscuous mode
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.071 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.075 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0bbd8e-77da-40d6-9391-5333db679625]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.087 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.090 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[1593dc04-3fd5-450d-9810-2ab2ba29ca1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.091 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[af019565-55ee-4650-a03f-7c0e8595ee6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.105 238613 DEBUG oslo.privsep.daemon [-] privsep: reply[e7173587-84c4-43dc-ac17-4c1e67fff898]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607933, 'reachable_time': 22327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258682, 'error': None, 'target': 'ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.107 107308 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-583566c3-a7da-49ba-8c93-87be3496cb80 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 15:56:31 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:31.107 107308 DEBUG oslo.privsep.daemon [-] privsep: reply[84e29ce4-39fb-4204-a4f1-ed911240eb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 15:56:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d583566c3\x2da7da\x2d49ba\x2d8c93\x2d87be3496cb80.mount: Deactivated successfully.
Jan 27 15:56:31 compute-0 openstack_network_exporter[204239]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:56:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:56:31 compute-0 openstack_network_exporter[204239]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:56:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:56:31 compute-0 nova_compute[185191]: 2026-01-27 15:56:31.532 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.882 185195 DEBUG nova.compute.manager [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-unplugged-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.883 185195 DEBUG oslo_concurrency.lockutils [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.883 185195 DEBUG oslo_concurrency.lockutils [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.883 185195 DEBUG oslo_concurrency.lockutils [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.883 185195 DEBUG nova.compute.manager [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] No waiting events found dispatching network-vif-unplugged-d11ff881-6533-4499-87d1-ff504269c883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:56:33 compute-0 nova_compute[185191]: 2026-01-27 15:56:33.883 185195 DEBUG nova.compute.manager [req-9ed28759-ae6b-4c45-851d-82902248c535 req-3e4e4ac1-5685-498c-9680-0e8653f16c2e 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-unplugged-d11ff881-6533-4499-87d1-ff504269c883 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.075 185195 DEBUG nova.network.neutron [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.097 185195 INFO nova.compute.manager [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] Took 3.03 seconds to deallocate network for instance.
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.143 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.144 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.205 185195 DEBUG nova.compute.provider_tree [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.221 185195 DEBUG nova.scheduler.client.report [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.247 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.273 185195 INFO nova.scheduler.client.report [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Deleted allocations for instance a0b14d34-73c5-426d-8d69-793643148639
Jan 27 15:56:34 compute-0 nova_compute[185191]: 2026-01-27 15:56:34.345 185195 DEBUG oslo_concurrency.lockutils [None req-20691fc5-24d4-421e-87ac-568fc61372d1 2f735dc3417d4dc1830a1081fe9a604b 20f0077bc9bd475ebff1667438d2013e - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:35 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:56:35.303 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 15:56:35 compute-0 nova_compute[185191]: 2026-01-27 15:56:35.998 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.037 185195 DEBUG nova.compute.manager [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.037 185195 DEBUG oslo_concurrency.lockutils [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Acquiring lock "a0b14d34-73c5-426d-8d69-793643148639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.037 185195 DEBUG oslo_concurrency.lockutils [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.038 185195 DEBUG oslo_concurrency.lockutils [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] Lock "a0b14d34-73c5-426d-8d69-793643148639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.038 185195 DEBUG nova.compute.manager [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] No waiting events found dispatching network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.038 185195 WARNING nova.compute.manager [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received unexpected event network-vif-plugged-d11ff881-6533-4499-87d1-ff504269c883 for instance with vm_state deleted and task_state None.
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.039 185195 DEBUG nova.compute.manager [req-19721ad3-2181-4944-8e87-c38ce7181cbc req-6257c8dc-2c7a-402a-91fe-872053c72c2c 394edb4b081b4169b85eaaacfc9895d4 bd89bc38d77e47be953ee2569b794180 - - default default] [instance: a0b14d34-73c5-426d-8d69-793643148639] Received event network-vif-deleted-d11ff881-6533-4499-87d1-ff504269c883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 15:56:36 compute-0 nova_compute[185191]: 2026-01-27 15:56:36.536 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:40 compute-0 nova_compute[185191]: 2026-01-27 15:56:40.929 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769529385.9269266, f8fa4ecf-1446-421b-893d-f2b34f89da54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:56:40 compute-0 nova_compute[185191]: 2026-01-27 15:56:40.930 185195 INFO nova.compute.manager [-] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] VM Stopped (Lifecycle Event)
Jan 27 15:56:40 compute-0 nova_compute[185191]: 2026-01-27 15:56:40.954 185195 DEBUG nova.compute.manager [None req-c46067b6-43b1-47cf-a539-cec02117e8c7 - - - - - -] [instance: f8fa4ecf-1446-421b-893d-f2b34f89da54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:56:41 compute-0 nova_compute[185191]: 2026-01-27 15:56:41.001 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:41 compute-0 nova_compute[185191]: 2026-01-27 15:56:41.539 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:44 compute-0 podman[258684]: 2026-01-27 15:56:44.301818843 +0000 UTC m=+0.058822006 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 15:56:45 compute-0 podman[258702]: 2026-01-27 15:56:45.320535789 +0000 UTC m=+0.079214066 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 15:56:45 compute-0 podman[258704]: 2026-01-27 15:56:45.370107945 +0000 UTC m=+0.108773113 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal)
Jan 27 15:56:45 compute-0 podman[258703]: 2026-01-27 15:56:45.37362415 +0000 UTC m=+0.129625815 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 15:56:45 compute-0 nova_compute[185191]: 2026-01-27 15:56:45.968 185195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769529390.9665928, a0b14d34-73c5-426d-8d69-793643148639 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 15:56:45 compute-0 nova_compute[185191]: 2026-01-27 15:56:45.968 185195 INFO nova.compute.manager [-] [instance: a0b14d34-73c5-426d-8d69-793643148639] VM Stopped (Lifecycle Event)
Jan 27 15:56:45 compute-0 nova_compute[185191]: 2026-01-27 15:56:45.988 185195 DEBUG nova.compute.manager [None req-c01717c1-b97a-4902-8ccb-90358919fc90 - - - - - -] [instance: a0b14d34-73c5-426d-8d69-793643148639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 15:56:46 compute-0 nova_compute[185191]: 2026-01-27 15:56:46.004 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:46 compute-0 nova_compute[185191]: 2026-01-27 15:56:46.542 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:46 compute-0 nova_compute[185191]: 2026-01-27 15:56:46.937 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:51 compute-0 nova_compute[185191]: 2026-01-27 15:56:51.008 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:51 compute-0 nova_compute[185191]: 2026-01-27 15:56:51.544 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:53 compute-0 nova_compute[185191]: 2026-01-27 15:56:53.958 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:56:53 compute-0 nova_compute[185191]: 2026-01-27 15:56:53.989 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:53 compute-0 nova_compute[185191]: 2026-01-27 15:56:53.989 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:53 compute-0 nova_compute[185191]: 2026-01-27 15:56:53.990 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:53 compute-0 nova_compute[185191]: 2026-01-27 15:56:53.990 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:56:54 compute-0 podman[258764]: 2026-01-27 15:56:54.159783164 +0000 UTC m=+0.120267852 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.318 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.320 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5333MB free_disk=72.33986282348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.320 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.320 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.400 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.400 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.419 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.437 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.437 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.455 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.474 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.496 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.523 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.549 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:56:54 compute-0 nova_compute[185191]: 2026-01-27 15:56:54.549 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:56:56 compute-0 nova_compute[185191]: 2026-01-27 15:56:56.012 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:56 compute-0 podman[258784]: 2026-01-27 15:56:56.317352614 +0000 UTC m=+0.069569606 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:56:56 compute-0 podman[258783]: 2026-01-27 15:56:56.330057886 +0000 UTC m=+0.080326506 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, config_id=kepler, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 15:56:56 compute-0 nova_compute[185191]: 2026-01-27 15:56:56.549 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:56:59 compute-0 podman[201073]: time="2026-01-27T15:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:56:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:56:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3930 "" "Go-http-client/1.1"
Jan 27 15:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:57:00.282 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:57:00.282 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:57:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:57:00.283 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:57:01 compute-0 nova_compute[185191]: 2026-01-27 15:57:01.016 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:01 compute-0 podman[258825]: 2026-01-27 15:57:01.359232656 +0000 UTC m=+0.105825253 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:57:01 compute-0 openstack_network_exporter[204239]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:57:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:57:01 compute-0 openstack_network_exporter[204239]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:57:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:57:01 compute-0 nova_compute[185191]: 2026-01-27 15:57:01.552 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:02 compute-0 nova_compute[185191]: 2026-01-27 15:57:02.536 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:02 compute-0 nova_compute[185191]: 2026-01-27 15:57:02.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:03 compute-0 nova_compute[185191]: 2026-01-27 15:57:03.946 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:05 compute-0 nova_compute[185191]: 2026-01-27 15:57:05.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:05 compute-0 nova_compute[185191]: 2026-01-27 15:57:05.961 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:06 compute-0 nova_compute[185191]: 2026-01-27 15:57:06.021 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:06 compute-0 nova_compute[185191]: 2026-01-27 15:57:06.555 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:06 compute-0 nova_compute[185191]: 2026-01-27 15:57:06.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:06 compute-0 nova_compute[185191]: 2026-01-27 15:57:06.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:57:07 compute-0 nova_compute[185191]: 2026-01-27 15:57:07.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:07 compute-0 nova_compute[185191]: 2026-01-27 15:57:07.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:57:07 compute-0 nova_compute[185191]: 2026-01-27 15:57:07.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:57:08 compute-0 nova_compute[185191]: 2026-01-27 15:57:08.029 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:57:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:10.998 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:57:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:10.998 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:57:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:10.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dc6dc080>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:57:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:57:11 compute-0 nova_compute[185191]: 2026-01-27 15:57:11.025 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:11 compute-0 nova_compute[185191]: 2026-01-27 15:57:11.557 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:11 compute-0 nova_compute[185191]: 2026-01-27 15:57:11.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:13 compute-0 sshd-session[258850]: Invalid user solana from 45.148.10.240 port 50118
Jan 27 15:57:13 compute-0 sshd-session[258850]: Connection closed by invalid user solana 45.148.10.240 port 50118 [preauth]
Jan 27 15:57:14 compute-0 podman[258852]: 2026-01-27 15:57:14.751492739 +0000 UTC m=+0.061896349 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 15:57:16 compute-0 nova_compute[185191]: 2026-01-27 15:57:16.031 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:16 compute-0 podman[258873]: 2026-01-27 15:57:16.329231361 +0000 UTC m=+0.082753771 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Jan 27 15:57:16 compute-0 podman[258871]: 2026-01-27 15:57:16.341908693 +0000 UTC m=+0.100658684 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 15:57:16 compute-0 podman[258872]: 2026-01-27 15:57:16.399761902 +0000 UTC m=+0.145477092 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 15:57:16 compute-0 nova_compute[185191]: 2026-01-27 15:57:16.558 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:21 compute-0 nova_compute[185191]: 2026-01-27 15:57:21.036 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:21 compute-0 nova_compute[185191]: 2026-01-27 15:57:21.559 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:22 compute-0 nova_compute[185191]: 2026-01-27 15:57:22.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:24 compute-0 podman[258937]: 2026-01-27 15:57:24.342381824 +0000 UTC m=+0.103324845 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 15:57:26 compute-0 nova_compute[185191]: 2026-01-27 15:57:26.040 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:26 compute-0 nova_compute[185191]: 2026-01-27 15:57:26.561 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:27 compute-0 podman[258958]: 2026-01-27 15:57:27.360818145 +0000 UTC m=+0.113669145 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:57:27 compute-0 podman[258957]: 2026-01-27 15:57:27.433271777 +0000 UTC m=+0.195025867 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 27 15:57:29 compute-0 podman[201073]: time="2026-01-27T15:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:57:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:57:29 compute-0 ovn_controller[97541]: 2026-01-27T15:57:29Z|00182|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 27 15:57:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3924 "" "Go-http-client/1.1"
Jan 27 15:57:31 compute-0 nova_compute[185191]: 2026-01-27 15:57:31.045 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:31 compute-0 openstack_network_exporter[204239]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:57:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:57:31 compute-0 openstack_network_exporter[204239]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:57:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:57:31 compute-0 nova_compute[185191]: 2026-01-27 15:57:31.564 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:31 compute-0 podman[259002]: 2026-01-27 15:57:31.652933551 +0000 UTC m=+0.059747191 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 15:57:36 compute-0 nova_compute[185191]: 2026-01-27 15:57:36.048 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:36 compute-0 nova_compute[185191]: 2026-01-27 15:57:36.567 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:41 compute-0 nova_compute[185191]: 2026-01-27 15:57:41.052 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:41 compute-0 nova_compute[185191]: 2026-01-27 15:57:41.568 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:45 compute-0 podman[259027]: 2026-01-27 15:57:45.307178856 +0000 UTC m=+0.064237532 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:57:46 compute-0 nova_compute[185191]: 2026-01-27 15:57:46.056 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:46 compute-0 nova_compute[185191]: 2026-01-27 15:57:46.569 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:47 compute-0 podman[259046]: 2026-01-27 15:57:47.328739848 +0000 UTC m=+0.087388826 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute)
Jan 27 15:57:47 compute-0 podman[259048]: 2026-01-27 15:57:47.336489067 +0000 UTC m=+0.087517069 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git)
Jan 27 15:57:47 compute-0 podman[259047]: 2026-01-27 15:57:47.363041113 +0000 UTC m=+0.116874651 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:57:51 compute-0 nova_compute[185191]: 2026-01-27 15:57:51.060 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:51 compute-0 nova_compute[185191]: 2026-01-27 15:57:51.572 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:54 compute-0 nova_compute[185191]: 2026-01-27 15:57:54.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:57:54 compute-0 nova_compute[185191]: 2026-01-27 15:57:54.983 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:57:54 compute-0 nova_compute[185191]: 2026-01-27 15:57:54.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:57:54 compute-0 nova_compute[185191]: 2026-01-27 15:57:54.984 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:57:54 compute-0 nova_compute[185191]: 2026-01-27 15:57:54.984 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.291 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.292 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5313MB free_disk=72.33986282348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.292 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.292 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:57:55 compute-0 podman[259112]: 2026-01-27 15:57:55.329056564 +0000 UTC m=+0.088807954 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.569 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.570 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.606 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.625 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.627 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:57:55 compute-0 nova_compute[185191]: 2026-01-27 15:57:55.627 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:57:56 compute-0 nova_compute[185191]: 2026-01-27 15:57:56.064 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:56 compute-0 nova_compute[185191]: 2026-01-27 15:57:56.576 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:57:58 compute-0 podman[259132]: 2026-01-27 15:57:58.35476195 +0000 UTC m=+0.113967253 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, version=9.4, config_id=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Jan 27 15:57:58 compute-0 podman[259133]: 2026-01-27 15:57:58.3718387 +0000 UTC m=+0.112805781 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:57:59 compute-0 podman[201073]: time="2026-01-27T15:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:57:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:57:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Jan 27 15:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:58:00.283 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:58:00.284 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:58:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:58:00.284 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:58:00 compute-0 sshd-session[259172]: Accepted publickey for zuul from 192.168.122.10 port 46016 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:58:00 compute-0 systemd-logind[820]: New session 32 of user zuul.
Jan 27 15:58:00 compute-0 systemd[1]: Started Session 32 of User zuul.
Jan 27 15:58:00 compute-0 sshd-session[259172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:58:01 compute-0 nova_compute[185191]: 2026-01-27 15:58:01.070 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:01 compute-0 sudo[259176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 15:58:01 compute-0 sudo[259176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:58:01 compute-0 openstack_network_exporter[204239]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:58:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:58:01 compute-0 openstack_network_exporter[204239]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:58:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:58:01 compute-0 nova_compute[185191]: 2026-01-27 15:58:01.578 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:02 compute-0 podman[259210]: 2026-01-27 15:58:02.299996028 +0000 UTC m=+0.085926847 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:58:03 compute-0 nova_compute[185191]: 2026-01-27 15:58:03.622 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:03 compute-0 nova_compute[185191]: 2026-01-27 15:58:03.624 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:04 compute-0 nova_compute[185191]: 2026-01-27 15:58:04.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:05 compute-0 nova_compute[185191]: 2026-01-27 15:58:05.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:06 compute-0 nova_compute[185191]: 2026-01-27 15:58:06.072 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:06 compute-0 nova_compute[185191]: 2026-01-27 15:58:06.582 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:06 compute-0 ovs-vsctl[259368]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 15:58:07 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 259200 (sos)
Jan 27 15:58:07 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 27 15:58:07 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 27 15:58:07 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 15:58:07 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 15:58:07 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 15:58:08 compute-0 nova_compute[185191]: 2026-01-27 15:58:08.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:08 compute-0 nova_compute[185191]: 2026-01-27 15:58:08.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:58:09 compute-0 crontab[259791]: (root) LIST (root)
Jan 27 15:58:09 compute-0 nova_compute[185191]: 2026-01-27 15:58:09.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:09 compute-0 nova_compute[185191]: 2026-01-27 15:58:09.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:58:09 compute-0 nova_compute[185191]: 2026-01-27 15:58:09.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:58:09 compute-0 nova_compute[185191]: 2026-01-27 15:58:09.995 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:58:11 compute-0 nova_compute[185191]: 2026-01-27 15:58:11.076 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:11 compute-0 nova_compute[185191]: 2026-01-27 15:58:11.584 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:11 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 15:58:11 compute-0 systemd[1]: Started Hostname Service.
Jan 27 15:58:11 compute-0 nova_compute[185191]: 2026-01-27 15:58:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:16 compute-0 nova_compute[185191]: 2026-01-27 15:58:16.078 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:16 compute-0 podman[260309]: 2026-01-27 15:58:16.311468641 +0000 UTC m=+0.067407098 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 15:58:16 compute-0 nova_compute[185191]: 2026-01-27 15:58:16.587 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:18 compute-0 podman[260654]: 2026-01-27 15:58:18.356558778 +0000 UTC m=+0.102953095 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
org.label-schema.build-date=20260126, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Jan 27 15:58:18 compute-0 podman[260668]: 2026-01-27 15:58:18.370768451 +0000 UTC m=+0.091564618 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64)
Jan 27 15:58:18 compute-0 podman[260659]: 2026-01-27 15:58:18.387688638 +0000 UTC m=+0.124884957 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 27 15:58:20 compute-0 ovs-appctl[261156]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 15:58:20 compute-0 ovs-appctl[261161]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 15:58:20 compute-0 ovs-appctl[261166]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 15:58:21 compute-0 nova_compute[185191]: 2026-01-27 15:58:21.080 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:21 compute-0 nova_compute[185191]: 2026-01-27 15:58:21.587 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:24 compute-0 nova_compute[185191]: 2026-01-27 15:58:24.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:25 compute-0 podman[262148]: 2026-01-27 15:58:25.644491085 +0000 UTC m=+0.083441520 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:58:26 compute-0 nova_compute[185191]: 2026-01-27 15:58:26.084 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:26 compute-0 nova_compute[185191]: 2026-01-27 15:58:26.589 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:28 compute-0 podman[262229]: 2026-01-27 15:58:28.494885347 +0000 UTC m=+0.087247043 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 15:58:28 compute-0 podman[262228]: 2026-01-27 15:58:28.49650268 +0000 UTC m=+0.090702175 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Jan 27 15:58:28 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 15:58:29 compute-0 podman[201073]: time="2026-01-27T15:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:58:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:58:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
Jan 27 15:58:29 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 15:58:30 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 15:58:31 compute-0 nova_compute[185191]: 2026-01-27 15:58:31.090 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:31 compute-0 openstack_network_exporter[204239]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:58:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:58:31 compute-0 openstack_network_exporter[204239]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:58:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:58:31 compute-0 nova_compute[185191]: 2026-01-27 15:58:31.590 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:33 compute-0 podman[262677]: 2026-01-27 15:58:33.328922428 +0000 UTC m=+0.081514267 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:58:36 compute-0 nova_compute[185191]: 2026-01-27 15:58:36.095 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:36 compute-0 nova_compute[185191]: 2026-01-27 15:58:36.593 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:41 compute-0 nova_compute[185191]: 2026-01-27 15:58:41.102 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:41 compute-0 nova_compute[185191]: 2026-01-27 15:58:41.596 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:46 compute-0 nova_compute[185191]: 2026-01-27 15:58:46.106 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:46 compute-0 nova_compute[185191]: 2026-01-27 15:58:46.598 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:47 compute-0 podman[262700]: 2026-01-27 15:58:47.308146923 +0000 UTC m=+0.064103899 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 15:58:49 compute-0 podman[262717]: 2026-01-27 15:58:49.241231711 +0000 UTC m=+0.074116839 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:58:49 compute-0 podman[262719]: 2026-01-27 15:58:49.243591074 +0000 UTC m=+0.070503601 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6)
Jan 27 15:58:49 compute-0 podman[262718]: 2026-01-27 15:58:49.303459598 +0000 UTC m=+0.130063926 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 15:58:50 compute-0 sudo[259176]: pam_unix(sudo:session): session closed for user root
Jan 27 15:58:50 compute-0 sshd-session[259175]: Received disconnect from 192.168.122.10 port 46016:11: disconnected by user
Jan 27 15:58:50 compute-0 sshd-session[259175]: Disconnected from user zuul 192.168.122.10 port 46016
Jan 27 15:58:50 compute-0 sshd-session[259172]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:58:50 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Session 32 logged out. Waiting for processes to exit.
Jan 27 15:58:50 compute-0 systemd[1]: session-32.scope: Consumed 1min 30.802s CPU time, 593.5M memory peak, read 196.1M from disk, written 1.6M to disk.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Removed session 32.
Jan 27 15:58:50 compute-0 sshd-session[262780]: Accepted publickey for zuul from 192.168.122.10 port 58262 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:58:50 compute-0 systemd-logind[820]: New session 33 of user zuul.
Jan 27 15:58:50 compute-0 systemd[1]: Started Session 33 of User zuul.
Jan 27 15:58:50 compute-0 sshd-session[262780]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:58:50 compute-0 sudo[262784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-27-xrlokkx.tar.xz
Jan 27 15:58:50 compute-0 sudo[262784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:58:50 compute-0 sudo[262784]: pam_unix(sudo:session): session closed for user root
Jan 27 15:58:50 compute-0 sshd-session[262783]: Received disconnect from 192.168.122.10 port 58262:11: disconnected by user
Jan 27 15:58:50 compute-0 sshd-session[262783]: Disconnected from user zuul 192.168.122.10 port 58262
Jan 27 15:58:50 compute-0 sshd-session[262780]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:58:50 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Session 33 logged out. Waiting for processes to exit.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Removed session 33.
Jan 27 15:58:50 compute-0 sshd-session[262809]: Accepted publickey for zuul from 192.168.122.10 port 58276 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 15:58:50 compute-0 systemd-logind[820]: New session 34 of user zuul.
Jan 27 15:58:50 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 27 15:58:50 compute-0 sshd-session[262809]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 15:58:50 compute-0 sudo[262813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 27 15:58:50 compute-0 sudo[262813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 15:58:50 compute-0 sudo[262813]: pam_unix(sudo:session): session closed for user root
Jan 27 15:58:50 compute-0 sshd-session[262812]: Received disconnect from 192.168.122.10 port 58276:11: disconnected by user
Jan 27 15:58:50 compute-0 sshd-session[262812]: Disconnected from user zuul 192.168.122.10 port 58276
Jan 27 15:58:50 compute-0 sshd-session[262809]: pam_unix(sshd:session): session closed for user zuul
Jan 27 15:58:50 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Session 34 logged out. Waiting for processes to exit.
Jan 27 15:58:50 compute-0 systemd-logind[820]: Removed session 34.
Jan 27 15:58:51 compute-0 nova_compute[185191]: 2026-01-27 15:58:51.110 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:51 compute-0 nova_compute[185191]: 2026-01-27 15:58:51.600 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:55 compute-0 nova_compute[185191]: 2026-01-27 15:58:55.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:58:55 compute-0 nova_compute[185191]: 2026-01-27 15:58:55.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:58:55 compute-0 nova_compute[185191]: 2026-01-27 15:58:55.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:58:55 compute-0 nova_compute[185191]: 2026-01-27 15:58:55.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:58:55 compute-0 nova_compute[185191]: 2026-01-27 15:58:55.994 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.115 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.322 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.322 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4919MB free_disk=72.33932495117188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.323 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.323 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:58:56 compute-0 podman[262838]: 2026-01-27 15:58:56.352203348 +0000 UTC m=+0.104895068 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 15:58:56 compute-0 nova_compute[185191]: 2026-01-27 15:58:56.603 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.328 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.328 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.727 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.757 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.758 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:58:57 compute-0 nova_compute[185191]: 2026-01-27 15:58:57.759 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:58:59 compute-0 podman[262858]: 2026-01-27 15:58:59.328994695 +0000 UTC m=+0.083789049 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=kepler, io.openshift.tags=base rhel9)
Jan 27 15:58:59 compute-0 podman[262859]: 2026-01-27 15:58:59.337956577 +0000 UTC m=+0.093835460 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 15:58:59 compute-0 podman[201073]: time="2026-01-27T15:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:58:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:58:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3924 "" "Go-http-client/1.1"
Jan 27 15:59:00 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 15:59:00 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 15:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:59:00.284 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:59:00.285 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:59:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 15:59:00.285 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:59:01 compute-0 nova_compute[185191]: 2026-01-27 15:59:01.119 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:01 compute-0 openstack_network_exporter[204239]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:59:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:59:01 compute-0 openstack_network_exporter[204239]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:59:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:59:01 compute-0 nova_compute[185191]: 2026-01-27 15:59:01.603 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:04 compute-0 podman[262903]: 2026-01-27 15:59:04.312227527 +0000 UTC m=+0.070457939 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 15:59:05 compute-0 nova_compute[185191]: 2026-01-27 15:59:05.754 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:05 compute-0 nova_compute[185191]: 2026-01-27 15:59:05.754 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:05 compute-0 nova_compute[185191]: 2026-01-27 15:59:05.755 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:06 compute-0 nova_compute[185191]: 2026-01-27 15:59:06.123 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:06 compute-0 nova_compute[185191]: 2026-01-27 15:59:06.606 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:07 compute-0 nova_compute[185191]: 2026-01-27 15:59:07.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:08 compute-0 nova_compute[185191]: 2026-01-27 15:59:08.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:08 compute-0 nova_compute[185191]: 2026-01-27 15:59:08.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 15:59:09 compute-0 nova_compute[185191]: 2026-01-27 15:59:09.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:09 compute-0 nova_compute[185191]: 2026-01-27 15:59:09.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 15:59:09 compute-0 nova_compute[185191]: 2026-01-27 15:59:09.948 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 15:59:09 compute-0 nova_compute[185191]: 2026-01-27 15:59:09.971 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 15:59:10 compute-0 nova_compute[185191]: 2026-01-27 15:59:10.966 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:10.998 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:10.999 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.003 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02dbc36960>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 15:59:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 15:59:11 compute-0 nova_compute[185191]: 2026-01-27 15:59:11.127 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:11 compute-0 nova_compute[185191]: 2026-01-27 15:59:11.608 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:12 compute-0 nova_compute[185191]: 2026-01-27 15:59:12.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:16 compute-0 nova_compute[185191]: 2026-01-27 15:59:16.130 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:16 compute-0 nova_compute[185191]: 2026-01-27 15:59:16.610 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:18 compute-0 podman[262927]: 2026-01-27 15:59:18.328065489 +0000 UTC m=+0.070433039 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 15:59:20 compute-0 podman[262948]: 2026-01-27 15:59:20.318002489 +0000 UTC m=+0.070278045 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 15:59:20 compute-0 podman[262946]: 2026-01-27 15:59:20.326234601 +0000 UTC m=+0.087250553 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 15:59:20 compute-0 podman[262947]: 2026-01-27 15:59:20.401565921 +0000 UTC m=+0.147329062 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 15:59:21 compute-0 nova_compute[185191]: 2026-01-27 15:59:21.132 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:21 compute-0 nova_compute[185191]: 2026-01-27 15:59:21.613 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:26 compute-0 nova_compute[185191]: 2026-01-27 15:59:26.136 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:26 compute-0 nova_compute[185191]: 2026-01-27 15:59:26.615 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:26 compute-0 nova_compute[185191]: 2026-01-27 15:59:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:27 compute-0 podman[263007]: 2026-01-27 15:59:27.364400936 +0000 UTC m=+0.117999401 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 27 15:59:29 compute-0 podman[201073]: time="2026-01-27T15:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:59:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:59:29 compute-0 podman[201073]: @ - - [27/Jan/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3922 "" "Go-http-client/1.1"
Jan 27 15:59:30 compute-0 podman[263029]: 2026-01-27 15:59:30.319569322 +0000 UTC m=+0.072656370 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 15:59:30 compute-0 podman[263028]: 2026-01-27 15:59:30.330864476 +0000 UTC m=+0.087826608 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9, version=9.4, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Jan 27 15:59:30 compute-0 sshd-session[263026]: Invalid user sol from 45.148.10.240 port 51014
Jan 27 15:59:30 compute-0 sshd-session[263026]: Connection closed by invalid user sol 45.148.10.240 port 51014 [preauth]
Jan 27 15:59:31 compute-0 nova_compute[185191]: 2026-01-27 15:59:31.141 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:31 compute-0 openstack_network_exporter[204239]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 15:59:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:59:31 compute-0 openstack_network_exporter[204239]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 15:59:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 15:59:31 compute-0 nova_compute[185191]: 2026-01-27 15:59:31.618 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:35 compute-0 podman[263069]: 2026-01-27 15:59:35.309705801 +0000 UTC m=+0.064889550 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 15:59:36 compute-0 nova_compute[185191]: 2026-01-27 15:59:36.145 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:36 compute-0 nova_compute[185191]: 2026-01-27 15:59:36.625 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:41 compute-0 nova_compute[185191]: 2026-01-27 15:59:41.149 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:41 compute-0 nova_compute[185191]: 2026-01-27 15:59:41.626 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:46 compute-0 nova_compute[185191]: 2026-01-27 15:59:46.153 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:46 compute-0 nova_compute[185191]: 2026-01-27 15:59:46.628 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:49 compute-0 podman[263092]: 2026-01-27 15:59:49.315743775 +0000 UTC m=+0.069031821 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 15:59:51 compute-0 nova_compute[185191]: 2026-01-27 15:59:51.157 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:51 compute-0 podman[263111]: 2026-01-27 15:59:51.32398757 +0000 UTC m=+0.083569344 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 27 15:59:51 compute-0 podman[263113]: 2026-01-27 15:59:51.333357222 +0000 UTC m=+0.084864308 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, 
build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, version=9.6, distribution-scope=public, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 15:59:51 compute-0 podman[263112]: 2026-01-27 15:59:51.37520243 +0000 UTC m=+0.131770122 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 15:59:51 compute-0 nova_compute[185191]: 2026-01-27 15:59:51.630 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:56 compute-0 nova_compute[185191]: 2026-01-27 15:59:56.161 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:56 compute-0 nova_compute[185191]: 2026-01-27 15:59:56.635 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 15:59:57 compute-0 nova_compute[185191]: 2026-01-27 15:59:57.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 15:59:57 compute-0 nova_compute[185191]: 2026-01-27 15:59:57.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:59:57 compute-0 nova_compute[185191]: 2026-01-27 15:59:57.994 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:59:57 compute-0 nova_compute[185191]: 2026-01-27 15:59:57.994 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:59:57 compute-0 nova_compute[185191]: 2026-01-27 15:59:57.994 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 15:59:58 compute-0 nova_compute[185191]: 2026-01-27 15:59:58.314 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 15:59:58 compute-0 nova_compute[185191]: 2026-01-27 15:59:58.316 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=72.339599609375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 15:59:58 compute-0 nova_compute[185191]: 2026-01-27 15:59:58.317 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 15:59:58 compute-0 nova_compute[185191]: 2026-01-27 15:59:58.318 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 15:59:58 compute-0 podman[263176]: 2026-01-27 15:59:58.324202202 +0000 UTC m=+0.082912105 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.135 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.137 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.179 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.208 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.211 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 15:59:59 compute-0 nova_compute[185191]: 2026-01-27 15:59:59.212 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 15:59:59 compute-0 podman[201073]: time="2026-01-27T15:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 15:59:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 15:59:59 compute-0 podman[201073]: @ - - [27/Jan/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3929 "" "Go-http-client/1.1"
Jan 27 16:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:00:00.285 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:00:00.286 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:00:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:00:00.286 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:00:01 compute-0 nova_compute[185191]: 2026-01-27 16:00:01.165 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:01 compute-0 podman[263195]: 2026-01-27 16:00:01.310101225 +0000 UTC m=+0.067504370 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.tags=base rhel9, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Jan 27 16:00:01 compute-0 podman[263196]: 2026-01-27 16:00:01.333326471 +0000 UTC m=+0.086152743 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:00:01 compute-0 openstack_network_exporter[204239]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:00:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:00:01 compute-0 openstack_network_exporter[204239]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:00:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:00:01 compute-0 nova_compute[185191]: 2026-01-27 16:00:01.638 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:06 compute-0 nova_compute[185191]: 2026-01-27 16:00:06.170 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:06 compute-0 nova_compute[185191]: 2026-01-27 16:00:06.207 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:06 compute-0 nova_compute[185191]: 2026-01-27 16:00:06.207 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:06 compute-0 podman[263234]: 2026-01-27 16:00:06.311624061 +0000 UTC m=+0.067870700 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 16:00:06 compute-0 nova_compute[185191]: 2026-01-27 16:00:06.639 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:06 compute-0 nova_compute[185191]: 2026-01-27 16:00:06.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:09 compute-0 nova_compute[185191]: 2026-01-27 16:00:09.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:09 compute-0 nova_compute[185191]: 2026-01-27 16:00:09.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:09 compute-0 nova_compute[185191]: 2026-01-27 16:00:09.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:00:10 compute-0 nova_compute[185191]: 2026-01-27 16:00:10.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:10 compute-0 nova_compute[185191]: 2026-01-27 16:00:10.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:00:10 compute-0 nova_compute[185191]: 2026-01-27 16:00:10.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:00:10 compute-0 nova_compute[185191]: 2026-01-27 16:00:10.967 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:00:10 compute-0 nova_compute[185191]: 2026-01-27 16:00:10.968 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:11 compute-0 nova_compute[185191]: 2026-01-27 16:00:11.174 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:11 compute-0 nova_compute[185191]: 2026-01-27 16:00:11.642 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:12 compute-0 nova_compute[185191]: 2026-01-27 16:00:12.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:16 compute-0 nova_compute[185191]: 2026-01-27 16:00:16.179 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:16 compute-0 nova_compute[185191]: 2026-01-27 16:00:16.645 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:20 compute-0 podman[263257]: 2026-01-27 16:00:20.297552495 +0000 UTC m=+0.057356037 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 16:00:21 compute-0 nova_compute[185191]: 2026-01-27 16:00:21.184 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:21 compute-0 nova_compute[185191]: 2026-01-27 16:00:21.647 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:22 compute-0 podman[263276]: 2026-01-27 16:00:22.324306688 +0000 UTC m=+0.078261910 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 16:00:22 compute-0 podman[263278]: 2026-01-27 16:00:22.366878436 +0000 UTC m=+0.109533214 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Jan 27 16:00:22 compute-0 podman[263277]: 2026-01-27 16:00:22.371683525 +0000 UTC m=+0.118304300 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:00:26 compute-0 nova_compute[185191]: 2026-01-27 16:00:26.187 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:26 compute-0 nova_compute[185191]: 2026-01-27 16:00:26.651 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:26 compute-0 nova_compute[185191]: 2026-01-27 16:00:26.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:29 compute-0 podman[263339]: 2026-01-27 16:00:29.312444296 +0000 UTC m=+0.073212364 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 16:00:29 compute-0 podman[201073]: time="2026-01-27T16:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:00:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:00:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:00:31 compute-0 nova_compute[185191]: 2026-01-27 16:00:31.192 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:31 compute-0 openstack_network_exporter[204239]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:00:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:00:31 compute-0 openstack_network_exporter[204239]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:00:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:00:31 compute-0 nova_compute[185191]: 2026-01-27 16:00:31.651 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:31 compute-0 podman[263359]: 2026-01-27 16:00:31.744877022 +0000 UTC m=+0.068535488 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler, release=1214.1726694543, managed_by=edpm_ansible, config_id=kepler, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 27 16:00:31 compute-0 podman[263360]: 2026-01-27 16:00:31.745759536 +0000 UTC m=+0.066968866 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 16:00:36 compute-0 nova_compute[185191]: 2026-01-27 16:00:36.196 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:36 compute-0 nova_compute[185191]: 2026-01-27 16:00:36.654 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:37 compute-0 podman[263403]: 2026-01-27 16:00:37.297875542 +0000 UTC m=+0.059588906 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:00:41 compute-0 nova_compute[185191]: 2026-01-27 16:00:41.200 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:41 compute-0 nova_compute[185191]: 2026-01-27 16:00:41.656 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:46 compute-0 nova_compute[185191]: 2026-01-27 16:00:46.205 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:46 compute-0 nova_compute[185191]: 2026-01-27 16:00:46.658 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:51 compute-0 nova_compute[185191]: 2026-01-27 16:00:51.208 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:51 compute-0 podman[263428]: 2026-01-27 16:00:51.31815522 +0000 UTC m=+0.074607162 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 16:00:51 compute-0 nova_compute[185191]: 2026-01-27 16:00:51.661 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:53 compute-0 podman[263445]: 2026-01-27 16:00:53.328676185 +0000 UTC m=+0.080253504 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 16:00:53 compute-0 podman[263447]: 2026-01-27 16:00:53.354150221 +0000 UTC m=+0.097619012 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 27 16:00:53 compute-0 podman[263446]: 2026-01-27 16:00:53.393943364 +0000 UTC m=+0.140881708 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 16:00:56 compute-0 nova_compute[185191]: 2026-01-27 16:00:56.212 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:56 compute-0 nova_compute[185191]: 2026-01-27 16:00:56.663 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:00:58 compute-0 nova_compute[185191]: 2026-01-27 16:00:58.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:00:58 compute-0 nova_compute[185191]: 2026-01-27 16:00:58.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:00:58 compute-0 nova_compute[185191]: 2026-01-27 16:00:58.986 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:00:58 compute-0 nova_compute[185191]: 2026-01-27 16:00:58.987 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:00:58 compute-0 nova_compute[185191]: 2026-01-27 16:00:58.987 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.320 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.322 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5257MB free_disk=72.339599609375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.322 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.323 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.446 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.446 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.482 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.559 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.561 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:00:59 compute-0 nova_compute[185191]: 2026-01-27 16:00:59.561 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:00:59 compute-0 podman[201073]: time="2026-01-27T16:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:00:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:00:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
Jan 27 16:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:01:00.287 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:01:00.287 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:01:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:01:00.287 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:01:00 compute-0 podman[263508]: 2026-01-27 16:01:00.31378301 +0000 UTC m=+0.067759417 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 16:01:01 compute-0 nova_compute[185191]: 2026-01-27 16:01:01.218 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:01 compute-0 openstack_network_exporter[204239]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:01:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:01:01 compute-0 openstack_network_exporter[204239]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:01:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:01:01 compute-0 CROND[263529]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 16:01:01 compute-0 run-parts[263532]: (/etc/cron.hourly) starting 0anacron
Jan 27 16:01:01 compute-0 run-parts[263538]: (/etc/cron.hourly) finished 0anacron
Jan 27 16:01:01 compute-0 CROND[263528]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 16:01:01 compute-0 nova_compute[185191]: 2026-01-27 16:01:01.664 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:02 compute-0 podman[263540]: 2026-01-27 16:01:02.311374467 +0000 UTC m=+0.067761007 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:01:02 compute-0 podman[263539]: 2026-01-27 16:01:02.328947111 +0000 UTC m=+0.081327613 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, 
summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Jan 27 16:01:02 compute-0 nova_compute[185191]: 2026-01-27 16:01:02.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:04 compute-0 nova_compute[185191]: 2026-01-27 16:01:04.958 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:06 compute-0 nova_compute[185191]: 2026-01-27 16:01:06.221 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:06 compute-0 nova_compute[185191]: 2026-01-27 16:01:06.666 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:06 compute-0 nova_compute[185191]: 2026-01-27 16:01:06.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:07 compute-0 nova_compute[185191]: 2026-01-27 16:01:07.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:08 compute-0 podman[263580]: 2026-01-27 16:01:08.323291534 +0000 UTC m=+0.077823599 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 16:01:09 compute-0 nova_compute[185191]: 2026-01-27 16:01:09.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:09 compute-0 nova_compute[185191]: 2026-01-27 16:01:09.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:01:10 compute-0 nova_compute[185191]: 2026-01-27 16:01:10.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:10 compute-0 nova_compute[185191]: 2026-01-27 16:01:10.968 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:10 compute-0 nova_compute[185191]: 2026-01-27 16:01:10.968 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 16:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:10.999 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 16:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:10.999 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 16:01:10 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:10.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.009 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.010 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:01:11.011 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:01:11 compute-0 nova_compute[185191]: 2026-01-27 16:01:11.227 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:11 compute-0 nova_compute[185191]: 2026-01-27 16:01:11.667 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:11 compute-0 nova_compute[185191]: 2026-01-27 16:01:11.955 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:12 compute-0 nova_compute[185191]: 2026-01-27 16:01:12.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:12 compute-0 nova_compute[185191]: 2026-01-27 16:01:12.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:01:12 compute-0 nova_compute[185191]: 2026-01-27 16:01:12.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:01:12 compute-0 nova_compute[185191]: 2026-01-27 16:01:12.976 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:01:13 compute-0 nova_compute[185191]: 2026-01-27 16:01:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:16 compute-0 nova_compute[185191]: 2026-01-27 16:01:16.231 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:16 compute-0 nova_compute[185191]: 2026-01-27 16:01:16.671 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:19 compute-0 nova_compute[185191]: 2026-01-27 16:01:19.770 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:20 compute-0 nova_compute[185191]: 2026-01-27 16:01:20.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:20 compute-0 nova_compute[185191]: 2026-01-27 16:01:20.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 16:01:20 compute-0 nova_compute[185191]: 2026-01-27 16:01:20.963 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 16:01:21 compute-0 nova_compute[185191]: 2026-01-27 16:01:21.239 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:21 compute-0 nova_compute[185191]: 2026-01-27 16:01:21.673 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:22 compute-0 podman[263605]: 2026-01-27 16:01:22.362927157 +0000 UTC m=+0.105687010 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 16:01:24 compute-0 podman[263623]: 2026-01-27 16:01:24.323144217 +0000 UTC m=+0.079585746 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 16:01:24 compute-0 podman[263625]: 2026-01-27 16:01:24.335927951 +0000 UTC m=+0.079971876 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Jan 27 16:01:24 compute-0 podman[263624]: 2026-01-27 16:01:24.389095164 +0000 UTC m=+0.142133571 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:01:26 compute-0 nova_compute[185191]: 2026-01-27 16:01:26.243 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:26 compute-0 nova_compute[185191]: 2026-01-27 16:01:26.674 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:27 compute-0 nova_compute[185191]: 2026-01-27 16:01:27.963 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:29 compute-0 podman[201073]: time="2026-01-27T16:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:01:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:01:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3929 "" "Go-http-client/1.1"
Jan 27 16:01:31 compute-0 nova_compute[185191]: 2026-01-27 16:01:31.247 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:31 compute-0 podman[263684]: 2026-01-27 16:01:31.313445862 +0000 UTC m=+0.074291533 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 27 16:01:31 compute-0 openstack_network_exporter[204239]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:01:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:01:31 compute-0 openstack_network_exporter[204239]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:01:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:01:31 compute-0 nova_compute[185191]: 2026-01-27 16:01:31.677 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:33 compute-0 podman[263705]: 2026-01-27 16:01:33.297951536 +0000 UTC m=+0.056216276 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 16:01:33 compute-0 podman[263704]: 2026-01-27 16:01:33.314277376 +0000 UTC m=+0.076095421 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, architecture=x86_64, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Jan 27 16:01:36 compute-0 nova_compute[185191]: 2026-01-27 16:01:36.251 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:36 compute-0 nova_compute[185191]: 2026-01-27 16:01:36.680 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:39 compute-0 podman[263747]: 2026-01-27 16:01:39.326182204 +0000 UTC m=+0.083566613 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:01:41 compute-0 nova_compute[185191]: 2026-01-27 16:01:41.254 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:41 compute-0 nova_compute[185191]: 2026-01-27 16:01:41.682 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:46 compute-0 nova_compute[185191]: 2026-01-27 16:01:46.258 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:46 compute-0 sshd-session[263771]: Invalid user sol from 45.148.10.240 port 42076
Jan 27 16:01:46 compute-0 sshd-session[263771]: Connection closed by invalid user sol 45.148.10.240 port 42076 [preauth]
Jan 27 16:01:46 compute-0 nova_compute[185191]: 2026-01-27 16:01:46.690 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:51 compute-0 nova_compute[185191]: 2026-01-27 16:01:51.263 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:51 compute-0 nova_compute[185191]: 2026-01-27 16:01:51.694 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:52 compute-0 podman[201073]: time="2026-01-27T16:01:52Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:01:52 compute-0 podman[201073]: @ - - [27/Jan/2026:16:01:52 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 27627 "" "Go-http-client/1.1"
Jan 27 16:01:53 compute-0 podman[263774]: 2026-01-27 16:01:53.296378435 +0000 UTC m=+0.055524697 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:01:55 compute-0 podman[263793]: 2026-01-27 16:01:55.344574085 +0000 UTC m=+0.101342592 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 16:01:55 compute-0 podman[263795]: 2026-01-27 16:01:55.353578388 +0000 UTC m=+0.091334173 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 16:01:55 compute-0 podman[263794]: 2026-01-27 16:01:55.384982424 +0000 UTC m=+0.134435974 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 16:01:56 compute-0 nova_compute[185191]: 2026-01-27 16:01:56.267 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:56 compute-0 nova_compute[185191]: 2026-01-27 16:01:56.697 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:01:58 compute-0 nova_compute[185191]: 2026-01-27 16:01:58.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:01:58 compute-0 nova_compute[185191]: 2026-01-27 16:01:58.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:01:58 compute-0 nova_compute[185191]: 2026-01-27 16:01:58.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:01:58 compute-0 nova_compute[185191]: 2026-01-27 16:01:58.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:01:58 compute-0 nova_compute[185191]: 2026-01-27 16:01:58.974 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.263 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.264 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5271MB free_disk=72.3396987915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.264 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.264 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.327 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.328 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.343 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.365 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.366 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.382 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.406 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.429 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.448 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.449 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:01:59 compute-0 nova_compute[185191]: 2026-01-27 16:01:59.449 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:01:59 compute-0 podman[201073]: time="2026-01-27T16:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:01:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:01:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3931 "" "Go-http-client/1.1"
Jan 27 16:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:02:00.288 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:02:00.288 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:02:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:02:00.288 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:02:01 compute-0 nova_compute[185191]: 2026-01-27 16:02:01.271 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:01 compute-0 openstack_network_exporter[204239]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:02:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:02:01 compute-0 openstack_network_exporter[204239]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:02:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:02:01 compute-0 nova_compute[185191]: 2026-01-27 16:02:01.699 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:01 compute-0 podman[263855]: 2026-01-27 16:02:01.810343383 +0000 UTC m=+0.074889950 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 27 16:02:04 compute-0 podman[263876]: 2026-01-27 16:02:04.32884736 +0000 UTC m=+0.079853242 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 16:02:04 compute-0 podman[263875]: 2026-01-27 16:02:04.340279308 +0000 UTC m=+0.084904498 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, name=ubi9, build-date=2024-09-18T21:23:30)
Jan 27 16:02:06 compute-0 nova_compute[185191]: 2026-01-27 16:02:06.275 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:06 compute-0 nova_compute[185191]: 2026-01-27 16:02:06.445 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:06 compute-0 nova_compute[185191]: 2026-01-27 16:02:06.701 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:07 compute-0 nova_compute[185191]: 2026-01-27 16:02:07.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:07 compute-0 nova_compute[185191]: 2026-01-27 16:02:07.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:09 compute-0 nova_compute[185191]: 2026-01-27 16:02:09.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:09 compute-0 nova_compute[185191]: 2026-01-27 16:02:09.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:02:10 compute-0 podman[263917]: 2026-01-27 16:02:10.312802552 +0000 UTC m=+0.061360374 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 16:02:11 compute-0 nova_compute[185191]: 2026-01-27 16:02:11.280 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:11 compute-0 nova_compute[185191]: 2026-01-27 16:02:11.703 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:12 compute-0 nova_compute[185191]: 2026-01-27 16:02:12.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.742 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.960 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.983 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:02:14 compute-0 nova_compute[185191]: 2026-01-27 16:02:14.984 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:16 compute-0 nova_compute[185191]: 2026-01-27 16:02:16.283 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:16 compute-0 nova_compute[185191]: 2026-01-27 16:02:16.709 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:21 compute-0 nova_compute[185191]: 2026-01-27 16:02:21.288 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:21 compute-0 nova_compute[185191]: 2026-01-27 16:02:21.712 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:24 compute-0 podman[263942]: 2026-01-27 16:02:24.330157394 +0000 UTC m=+0.088744792 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 16:02:26 compute-0 nova_compute[185191]: 2026-01-27 16:02:26.292 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:26 compute-0 podman[263961]: 2026-01-27 16:02:26.348617012 +0000 UTC m=+0.105859923 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 16:02:26 compute-0 podman[263963]: 2026-01-27 16:02:26.358402206 +0000 UTC m=+0.094117267 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 16:02:26 compute-0 podman[263962]: 2026-01-27 16:02:26.359890796 +0000 UTC m=+0.113849718 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 16:02:26 compute-0 nova_compute[185191]: 2026-01-27 16:02:26.718 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:27 compute-0 nova_compute[185191]: 2026-01-27 16:02:27.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:29 compute-0 podman[201073]: time="2026-01-27T16:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:02:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:02:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3925 "" "Go-http-client/1.1"
Jan 27 16:02:31 compute-0 nova_compute[185191]: 2026-01-27 16:02:31.297 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:31 compute-0 openstack_network_exporter[204239]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:02:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:02:31 compute-0 openstack_network_exporter[204239]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:02:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:02:31 compute-0 nova_compute[185191]: 2026-01-27 16:02:31.720 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:32 compute-0 podman[264029]: 2026-01-27 16:02:32.317607982 +0000 UTC m=+0.071033695 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:02:35 compute-0 podman[264050]: 2026-01-27 16:02:35.335310512 +0000 UTC m=+0.088980609 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 16:02:35 compute-0 podman[264049]: 2026-01-27 16:02:35.356705389 +0000 UTC m=+0.110452388 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 16:02:36 compute-0 nova_compute[185191]: 2026-01-27 16:02:36.301 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:36 compute-0 nova_compute[185191]: 2026-01-27 16:02:36.724 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:41 compute-0 nova_compute[185191]: 2026-01-27 16:02:41.304 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:41 compute-0 podman[264093]: 2026-01-27 16:02:41.308202438 +0000 UTC m=+0.059023392 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 16:02:41 compute-0 nova_compute[185191]: 2026-01-27 16:02:41.727 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:46 compute-0 nova_compute[185191]: 2026-01-27 16:02:46.309 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:46 compute-0 nova_compute[185191]: 2026-01-27 16:02:46.732 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:51 compute-0 nova_compute[185191]: 2026-01-27 16:02:51.314 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:51 compute-0 nova_compute[185191]: 2026-01-27 16:02:51.732 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:55 compute-0 podman[264117]: 2026-01-27 16:02:55.305174311 +0000 UTC m=+0.064648869 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 16:02:56 compute-0 nova_compute[185191]: 2026-01-27 16:02:56.317 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:56 compute-0 nova_compute[185191]: 2026-01-27 16:02:56.735 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:02:57 compute-0 podman[264135]: 2026-01-27 16:02:57.319053343 +0000 UTC m=+0.070534498 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 27 16:02:57 compute-0 podman[264136]: 2026-01-27 16:02:57.360612781 +0000 UTC m=+0.110017420 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 16:02:57 compute-0 podman[264137]: 2026-01-27 16:02:57.368976176 +0000 UTC m=+0.103946347 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git)
Jan 27 16:02:58 compute-0 nova_compute[185191]: 2026-01-27 16:02:58.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:02:58 compute-0 nova_compute[185191]: 2026-01-27 16:02:58.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:02:58 compute-0 nova_compute[185191]: 2026-01-27 16:02:58.992 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:02:58 compute-0 nova_compute[185191]: 2026-01-27 16:02:58.993 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:02:58 compute-0 nova_compute[185191]: 2026-01-27 16:02:58.993 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.292 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.293 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5277MB free_disk=72.3396987915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.293 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.294 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.359 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.359 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.387 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.404 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.405 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:02:59 compute-0 nova_compute[185191]: 2026-01-27 16:02:59.406 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:02:59 compute-0 podman[201073]: time="2026-01-27T16:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:02:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:02:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
Jan 27 16:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:03:00.289 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:03:00.290 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:03:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:03:00.290 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:03:01 compute-0 nova_compute[185191]: 2026-01-27 16:03:01.322 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:01 compute-0 openstack_network_exporter[204239]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:03:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:03:01 compute-0 openstack_network_exporter[204239]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:03:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:03:01 compute-0 nova_compute[185191]: 2026-01-27 16:03:01.736 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:03 compute-0 podman[264194]: 2026-01-27 16:03:03.304414113 +0000 UTC m=+0.065291167 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 16:03:06 compute-0 podman[264214]: 2026-01-27 16:03:06.318588977 +0000 UTC m=+0.067654461 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:03:06 compute-0 nova_compute[185191]: 2026-01-27 16:03:06.325 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:06 compute-0 podman[264213]: 2026-01-27 16:03:06.331848053 +0000 UTC m=+0.082670534 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of 
your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=)
Jan 27 16:03:06 compute-0 nova_compute[185191]: 2026-01-27 16:03:06.400 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:06 compute-0 nova_compute[185191]: 2026-01-27 16:03:06.740 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:08 compute-0 nova_compute[185191]: 2026-01-27 16:03:08.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:09 compute-0 nova_compute[185191]: 2026-01-27 16:03:09.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.000 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.000 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.000 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02de52a6f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:03:11.029 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:03:11 compute-0 nova_compute[185191]: 2026-01-27 16:03:11.329 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:11 compute-0 nova_compute[185191]: 2026-01-27 16:03:11.741 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:11 compute-0 nova_compute[185191]: 2026-01-27 16:03:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:11 compute-0 nova_compute[185191]: 2026-01-27 16:03:11.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:03:12 compute-0 podman[264253]: 2026-01-27 16:03:12.291618227 +0000 UTC m=+0.050849028 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:03:13 compute-0 nova_compute[185191]: 2026-01-27 16:03:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:14 compute-0 nova_compute[185191]: 2026-01-27 16:03:14.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:15 compute-0 nova_compute[185191]: 2026-01-27 16:03:15.202 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:15 compute-0 nova_compute[185191]: 2026-01-27 16:03:15.202 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:03:15 compute-0 nova_compute[185191]: 2026-01-27 16:03:15.203 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:03:15 compute-0 nova_compute[185191]: 2026-01-27 16:03:15.250 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:03:16 compute-0 nova_compute[185191]: 2026-01-27 16:03:16.333 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:16 compute-0 nova_compute[185191]: 2026-01-27 16:03:16.745 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:16 compute-0 nova_compute[185191]: 2026-01-27 16:03:16.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:21 compute-0 nova_compute[185191]: 2026-01-27 16:03:21.336 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:21 compute-0 nova_compute[185191]: 2026-01-27 16:03:21.747 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:26 compute-0 podman[264277]: 2026-01-27 16:03:26.31261295 +0000 UTC m=+0.070921628 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 16:03:26 compute-0 nova_compute[185191]: 2026-01-27 16:03:26.339 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:26 compute-0 nova_compute[185191]: 2026-01-27 16:03:26.750 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:28 compute-0 podman[264294]: 2026-01-27 16:03:28.316943805 +0000 UTC m=+0.077138556 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Jan 27 16:03:28 compute-0 podman[264296]: 2026-01-27 16:03:28.336064919 +0000 UTC m=+0.087356310 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Jan 27 16:03:28 compute-0 podman[264295]: 2026-01-27 16:03:28.38221693 +0000 UTC m=+0.133198613 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 27 16:03:28 compute-0 nova_compute[185191]: 2026-01-27 16:03:28.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:29 compute-0 podman[201073]: time="2026-01-27T16:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:03:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:03:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
Jan 27 16:03:31 compute-0 nova_compute[185191]: 2026-01-27 16:03:31.344 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:31 compute-0 openstack_network_exporter[204239]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:03:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:03:31 compute-0 openstack_network_exporter[204239]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:03:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:03:31 compute-0 nova_compute[185191]: 2026-01-27 16:03:31.753 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:34 compute-0 podman[264356]: 2026-01-27 16:03:34.337694729 +0000 UTC m=+0.099210989 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:03:36 compute-0 nova_compute[185191]: 2026-01-27 16:03:36.348 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:36 compute-0 nova_compute[185191]: 2026-01-27 16:03:36.755 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:37 compute-0 podman[264375]: 2026-01-27 16:03:37.304022097 +0000 UTC m=+0.059930223 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 16:03:37 compute-0 podman[264376]: 2026-01-27 16:03:37.327612061 +0000 UTC m=+0.080710711 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 16:03:41 compute-0 nova_compute[185191]: 2026-01-27 16:03:41.353 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:41 compute-0 nova_compute[185191]: 2026-01-27 16:03:41.759 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:43 compute-0 podman[264417]: 2026-01-27 16:03:43.331814319 +0000 UTC m=+0.086427996 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 16:03:46 compute-0 nova_compute[185191]: 2026-01-27 16:03:46.357 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:46 compute-0 nova_compute[185191]: 2026-01-27 16:03:46.762 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:51 compute-0 nova_compute[185191]: 2026-01-27 16:03:51.361 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:51 compute-0 nova_compute[185191]: 2026-01-27 16:03:51.764 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:55 compute-0 nova_compute[185191]: 2026-01-27 16:03:55.335 185195 DEBUG oslo_concurrency.processutils [None req-76b0a09b-b426-4de1-8030-2625e7b21e12 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 16:03:55 compute-0 nova_compute[185191]: 2026-01-27 16:03:55.358 185195 DEBUG oslo_concurrency.processutils [None req-76b0a09b-b426-4de1-8030-2625e7b21e12 24260fb24da44b10b598f9c822c026b8 dd88ca4062da4fb9bedb3a0002a43c12 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 16:03:56 compute-0 nova_compute[185191]: 2026-01-27 16:03:56.364 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:56 compute-0 nova_compute[185191]: 2026-01-27 16:03:56.766 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:03:57 compute-0 podman[264443]: 2026-01-27 16:03:57.307088714 +0000 UTC m=+0.064483655 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 16:03:58 compute-0 nova_compute[185191]: 2026-01-27 16:03:58.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.002 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.004 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.005 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.005 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:03:59 compute-0 podman[264464]: 2026-01-27 16:03:59.32261139 +0000 UTC m=+0.072148521 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 16:03:59 compute-0 podman[264463]: 2026-01-27 16:03:59.351081626 +0000 UTC m=+0.106104515 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:03:59 compute-0 podman[264462]: 2026-01-27 16:03:59.364889057 +0000 UTC m=+0.110756200 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS)
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.382 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.383 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5295MB free_disk=72.3396987915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.383 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.383 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.640 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.641 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.699 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.745 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.747 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:03:59 compute-0 nova_compute[185191]: 2026-01-27 16:03:59.748 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:03:59 compute-0 podman[201073]: time="2026-01-27T16:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:03:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:03:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:00.291 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:00.291 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:04:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:00.292 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:04:01 compute-0 nova_compute[185191]: 2026-01-27 16:04:01.368 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:01 compute-0 openstack_network_exporter[204239]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:04:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:04:01 compute-0 openstack_network_exporter[204239]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:04:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:04:01 compute-0 nova_compute[185191]: 2026-01-27 16:04:01.768 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:05 compute-0 podman[264526]: 2026-01-27 16:04:05.315576338 +0000 UTC m=+0.070306162 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 16:04:06 compute-0 sshd-session[264546]: Invalid user sol from 45.148.10.240 port 40572
Jan 27 16:04:06 compute-0 sshd-session[264546]: Connection closed by invalid user sol 45.148.10.240 port 40572 [preauth]
Jan 27 16:04:06 compute-0 nova_compute[185191]: 2026-01-27 16:04:06.373 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:06 compute-0 nova_compute[185191]: 2026-01-27 16:04:06.742 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:06 compute-0 nova_compute[185191]: 2026-01-27 16:04:06.771 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:07.695 106793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '8e:e9:27', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:52:55:89:e6:e7'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 16:04:07 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:07.696 106793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 16:04:07 compute-0 nova_compute[185191]: 2026-01-27 16:04:07.696 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:08 compute-0 podman[264549]: 2026-01-27 16:04:08.30925263 +0000 UTC m=+0.062466441 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 16:04:08 compute-0 podman[264548]: 2026-01-27 16:04:08.329896286 +0000 UTC m=+0.083010164 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, 
io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, managed_by=edpm_ansible)
Jan 27 16:04:08 compute-0 nova_compute[185191]: 2026-01-27 16:04:08.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:10 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:04:10.699 106793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=320c7d4f-8b68-4343-92ac-19c792fa938e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 16:04:10 compute-0 nova_compute[185191]: 2026-01-27 16:04:10.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:11 compute-0 nova_compute[185191]: 2026-01-27 16:04:11.376 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:11 compute-0 nova_compute[185191]: 2026-01-27 16:04:11.773 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:12 compute-0 nova_compute[185191]: 2026-01-27 16:04:12.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:12 compute-0 nova_compute[185191]: 2026-01-27 16:04:12.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:04:13 compute-0 nova_compute[185191]: 2026-01-27 16:04:13.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:14 compute-0 podman[264589]: 2026-01-27 16:04:14.317363313 +0000 UTC m=+0.060603901 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 16:04:14 compute-0 nova_compute[185191]: 2026-01-27 16:04:14.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:14 compute-0 nova_compute[185191]: 2026-01-27 16:04:14.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:04:14 compute-0 nova_compute[185191]: 2026-01-27 16:04:14.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:04:14 compute-0 nova_compute[185191]: 2026-01-27 16:04:14.962 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:04:16 compute-0 nova_compute[185191]: 2026-01-27 16:04:16.380 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:16 compute-0 nova_compute[185191]: 2026-01-27 16:04:16.776 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:17 compute-0 nova_compute[185191]: 2026-01-27 16:04:17.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:21 compute-0 nova_compute[185191]: 2026-01-27 16:04:21.385 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:21 compute-0 nova_compute[185191]: 2026-01-27 16:04:21.778 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:26 compute-0 nova_compute[185191]: 2026-01-27 16:04:26.390 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:26 compute-0 nova_compute[185191]: 2026-01-27 16:04:26.783 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:29 compute-0 podman[264612]: 2026-01-27 16:04:29.317367396 +0000 UTC m=+0.056153691 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 16:04:29 compute-0 podman[201073]: time="2026-01-27T16:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:04:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:04:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3928 "" "Go-http-client/1.1"
Jan 27 16:04:29 compute-0 nova_compute[185191]: 2026-01-27 16:04:29.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:04:30 compute-0 podman[264631]: 2026-01-27 16:04:30.332853827 +0000 UTC m=+0.080171397 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052)
Jan 27 16:04:30 compute-0 podman[264633]: 2026-01-27 16:04:30.359103743 +0000 UTC m=+0.101094740 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, 
url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Jan 27 16:04:30 compute-0 podman[264632]: 2026-01-27 16:04:30.390813226 +0000 UTC m=+0.137998213 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 27 16:04:31 compute-0 nova_compute[185191]: 2026-01-27 16:04:31.393 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:31 compute-0 openstack_network_exporter[204239]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:04:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:04:31 compute-0 openstack_network_exporter[204239]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:04:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:04:31 compute-0 nova_compute[185191]: 2026-01-27 16:04:31.786 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:36 compute-0 podman[264691]: 2026-01-27 16:04:36.305139137 +0000 UTC m=+0.065566914 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 16:04:36 compute-0 nova_compute[185191]: 2026-01-27 16:04:36.397 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:36 compute-0 nova_compute[185191]: 2026-01-27 16:04:36.788 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:39 compute-0 podman[264711]: 2026-01-27 16:04:39.313571626 +0000 UTC m=+0.058344820 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 16:04:39 compute-0 podman[264710]: 2026-01-27 16:04:39.347221281 +0000 UTC m=+0.097219865 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Jan 27 16:04:41 compute-0 nova_compute[185191]: 2026-01-27 16:04:41.401 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:41 compute-0 nova_compute[185191]: 2026-01-27 16:04:41.791 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:44 compute-0 podman[264751]: 2026-01-27 16:04:44.777441703 +0000 UTC m=+0.091429310 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 16:04:46 compute-0 nova_compute[185191]: 2026-01-27 16:04:46.406 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:46 compute-0 nova_compute[185191]: 2026-01-27 16:04:46.793 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:51 compute-0 nova_compute[185191]: 2026-01-27 16:04:51.409 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:51 compute-0 nova_compute[185191]: 2026-01-27 16:04:51.797 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:56 compute-0 nova_compute[185191]: 2026-01-27 16:04:56.414 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:56 compute-0 nova_compute[185191]: 2026-01-27 16:04:56.798 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:04:59 compute-0 podman[201073]: time="2026-01-27T16:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:04:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:04:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Jan 27 16:04:59 compute-0 nova_compute[185191]: 2026-01-27 16:04:59.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.027 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.027 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.028 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.028 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:05:00.291 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:05:00.292 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:05:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:05:00.292 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:05:00 compute-0 podman[264775]: 2026-01-27 16:05:00.337351915 +0000 UTC m=+0.097983156 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.351 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.353 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5302MB free_disk=72.33965301513672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.353 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.354 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:05:00 compute-0 podman[264794]: 2026-01-27 16:05:00.446788109 +0000 UTC m=+0.074875925 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.486 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.487 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:05:00 compute-0 podman[264815]: 2026-01-27 16:05:00.552142842 +0000 UTC m=+0.072343717 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, 
config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 27 16:05:00 compute-0 podman[264814]: 2026-01-27 16:05:00.567490715 +0000 UTC m=+0.096105846 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.630 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.644 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.645 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:05:00 compute-0 nova_compute[185191]: 2026-01-27 16:05:00.645 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:05:01 compute-0 nova_compute[185191]: 2026-01-27 16:05:01.416 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:01 compute-0 openstack_network_exporter[204239]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:05:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:05:01 compute-0 openstack_network_exporter[204239]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:05:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:05:01 compute-0 nova_compute[185191]: 2026-01-27 16:05:01.799 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:06 compute-0 nova_compute[185191]: 2026-01-27 16:05:06.420 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:06 compute-0 nova_compute[185191]: 2026-01-27 16:05:06.802 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:07 compute-0 podman[264861]: 2026-01-27 16:05:07.324954852 +0000 UTC m=+0.080008953 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 16:05:08 compute-0 nova_compute[185191]: 2026-01-27 16:05:08.641 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:09 compute-0 nova_compute[185191]: 2026-01-27 16:05:09.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:10 compute-0 podman[264880]: 2026-01-27 16:05:10.329068806 +0000 UTC m=+0.085820309 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, config_id=kepler, release-0.7.12=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public)
Jan 27 16:05:10 compute-0 podman[264881]: 2026-01-27 16:05:10.349842445 +0000 UTC m=+0.088681866 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.001 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.001 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.001 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.002 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.006 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'network.outgoing.packets.error': [], 'disk.ephemeral.size': [], 'disk.root.size': [], 'network.incoming.packets': [], 'cpu': [], 'power.state': [], 'network.outgoing.packets': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:05:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:05:11 compute-0 nova_compute[185191]: 2026-01-27 16:05:11.424 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:11 compute-0 nova_compute[185191]: 2026-01-27 16:05:11.805 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:11 compute-0 nova_compute[185191]: 2026-01-27 16:05:11.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:13 compute-0 nova_compute[185191]: 2026-01-27 16:05:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:13 compute-0 nova_compute[185191]: 2026-01-27 16:05:13.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:05:14 compute-0 nova_compute[185191]: 2026-01-27 16:05:14.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:15 compute-0 podman[264926]: 2026-01-27 16:05:15.336226111 +0000 UTC m=+0.090145636 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.427 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.807 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:05:16 compute-0 nova_compute[185191]: 2026-01-27 16:05:16.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:05:18 compute-0 nova_compute[185191]: 2026-01-27 16:05:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:19 compute-0 nova_compute[185191]: 2026-01-27 16:05:19.940 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:21 compute-0 nova_compute[185191]: 2026-01-27 16:05:21.431 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:21 compute-0 nova_compute[185191]: 2026-01-27 16:05:21.809 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:26 compute-0 nova_compute[185191]: 2026-01-27 16:05:26.435 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:26 compute-0 nova_compute[185191]: 2026-01-27 16:05:26.811 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:29 compute-0 podman[201073]: time="2026-01-27T16:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:05:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:05:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3925 "" "Go-http-client/1.1"
Jan 27 16:05:30 compute-0 nova_compute[185191]: 2026-01-27 16:05:30.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:05:31 compute-0 podman[264951]: 2026-01-27 16:05:31.339004313 +0000 UTC m=+0.088419359 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 16:05:31 compute-0 podman[264953]: 2026-01-27 16:05:31.345520389 +0000 UTC m=+0.088916983 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Jan 27 16:05:31 compute-0 podman[264950]: 2026-01-27 16:05:31.346285219 +0000 UTC m=+0.101098960 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 16:05:31 compute-0 podman[264952]: 2026-01-27 16:05:31.391941077 +0000 UTC m=+0.135602748 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 16:05:31 compute-0 openstack_network_exporter[204239]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:05:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:05:31 compute-0 openstack_network_exporter[204239]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:05:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:05:31 compute-0 nova_compute[185191]: 2026-01-27 16:05:31.436 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:31 compute-0 nova_compute[185191]: 2026-01-27 16:05:31.813 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:36 compute-0 nova_compute[185191]: 2026-01-27 16:05:36.441 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:36 compute-0 nova_compute[185191]: 2026-01-27 16:05:36.816 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:38 compute-0 podman[265032]: 2026-01-27 16:05:38.300636342 +0000 UTC m=+0.061797443 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 16:05:41 compute-0 podman[265052]: 2026-01-27 16:05:41.308486116 +0000 UTC m=+0.063219472 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 16:05:41 compute-0 podman[265051]: 2026-01-27 16:05:41.315366681 +0000 UTC m=+0.073320693 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 16:05:41 compute-0 nova_compute[185191]: 2026-01-27 16:05:41.444 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:41 compute-0 nova_compute[185191]: 2026-01-27 16:05:41.818 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:46 compute-0 podman[265093]: 2026-01-27 16:05:46.326839811 +0000 UTC m=+0.082729646 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:05:46 compute-0 nova_compute[185191]: 2026-01-27 16:05:46.448 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:46 compute-0 nova_compute[185191]: 2026-01-27 16:05:46.822 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:51 compute-0 nova_compute[185191]: 2026-01-27 16:05:51.452 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:51 compute-0 nova_compute[185191]: 2026-01-27 16:05:51.824 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:56 compute-0 nova_compute[185191]: 2026-01-27 16:05:56.456 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:56 compute-0 nova_compute[185191]: 2026-01-27 16:05:56.826 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:05:59 compute-0 podman[201073]: time="2026-01-27T16:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:05:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:05:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:06:00.293 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:06:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:06:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:06:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:06:01 compute-0 openstack_network_exporter[204239]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:06:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:06:01 compute-0 openstack_network_exporter[204239]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:06:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.458 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.830 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.979 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.980 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.981 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:06:01 compute-0 nova_compute[185191]: 2026-01-27 16:06:01.981 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:06:02 compute-0 podman[265117]: 2026-01-27 16:06:02.379019711 +0000 UTC m=+0.121544060 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 16:06:02 compute-0 podman[265116]: 2026-01-27 16:06:02.382597468 +0000 UTC m=+0.127637904 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:06:02 compute-0 podman[265119]: 2026-01-27 16:06:02.39346501 +0000 UTC m=+0.129461323 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.421 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.421 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5303MB free_disk=72.33965301513672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.422 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.422 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:06:02 compute-0 podman[265118]: 2026-01-27 16:06:02.430108985 +0000 UTC m=+0.158236056 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.503 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.503 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.643 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.661 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.662 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:06:02 compute-0 nova_compute[185191]: 2026-01-27 16:06:02.662 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:06:06 compute-0 nova_compute[185191]: 2026-01-27 16:06:06.470 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:06 compute-0 nova_compute[185191]: 2026-01-27 16:06:06.844 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:06 compute-0 rsyslogd[235702]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 16:06:06 compute-0 rsyslogd[235702]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 16:06:08 compute-0 nova_compute[185191]: 2026-01-27 16:06:08.657 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:09 compute-0 podman[265195]: 2026-01-27 16:06:09.314685482 +0000 UTC m=+0.074192696 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:06:09 compute-0 nova_compute[185191]: 2026-01-27 16:06:09.953 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:11 compute-0 nova_compute[185191]: 2026-01-27 16:06:11.478 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:11 compute-0 nova_compute[185191]: 2026-01-27 16:06:11.849 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:11 compute-0 nova_compute[185191]: 2026-01-27 16:06:11.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:11 compute-0 nova_compute[185191]: 2026-01-27 16:06:11.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:11 compute-0 nova_compute[185191]: 2026-01-27 16:06:11.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 16:06:12 compute-0 podman[265215]: 2026-01-27 16:06:12.324617962 +0000 UTC m=+0.068127003 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container)
Jan 27 16:06:12 compute-0 podman[265216]: 2026-01-27 16:06:12.351434153 +0000 UTC m=+0.077391702 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:06:15 compute-0 nova_compute[185191]: 2026-01-27 16:06:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:15 compute-0 nova_compute[185191]: 2026-01-27 16:06:15.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:06:15 compute-0 nova_compute[185191]: 2026-01-27 16:06:15.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:16 compute-0 nova_compute[185191]: 2026-01-27 16:06:16.483 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:16 compute-0 nova_compute[185191]: 2026-01-27 16:06:16.853 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:17 compute-0 nova_compute[185191]: 2026-01-27 16:06:17.142 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:17 compute-0 nova_compute[185191]: 2026-01-27 16:06:17.142 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:06:17 compute-0 nova_compute[185191]: 2026-01-27 16:06:17.143 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:06:17 compute-0 nova_compute[185191]: 2026-01-27 16:06:17.161 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:06:17 compute-0 nova_compute[185191]: 2026-01-27 16:06:17.162 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:17 compute-0 podman[265259]: 2026-01-27 16:06:17.323626917 +0000 UTC m=+0.076514079 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:06:18 compute-0 nova_compute[185191]: 2026-01-27 16:06:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:21 compute-0 nova_compute[185191]: 2026-01-27 16:06:21.489 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:21 compute-0 nova_compute[185191]: 2026-01-27 16:06:21.856 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:25 compute-0 sshd-session[265284]: Invalid user sol from 45.148.10.240 port 45326
Jan 27 16:06:25 compute-0 sshd-session[265284]: Connection closed by invalid user sol 45.148.10.240 port 45326 [preauth]
Jan 27 16:06:26 compute-0 nova_compute[185191]: 2026-01-27 16:06:26.492 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:26 compute-0 nova_compute[185191]: 2026-01-27 16:06:26.859 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:29 compute-0 podman[201073]: time="2026-01-27T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:06:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:06:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
Jan 27 16:06:30 compute-0 nova_compute[185191]: 2026-01-27 16:06:30.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:31 compute-0 openstack_network_exporter[204239]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:06:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:06:31 compute-0 openstack_network_exporter[204239]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:06:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:06:31 compute-0 nova_compute[185191]: 2026-01-27 16:06:31.495 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:31 compute-0 nova_compute[185191]: 2026-01-27 16:06:31.861 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:33 compute-0 podman[265286]: 2026-01-27 16:06:33.317956973 +0000 UTC m=+0.074419653 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute)
Jan 27 16:06:33 compute-0 podman[265289]: 2026-01-27 16:06:33.320594664 +0000 UTC m=+0.066163861 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41)
Jan 27 16:06:33 compute-0 podman[265287]: 2026-01-27 16:06:33.33793576 +0000 UTC m=+0.088816910 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 16:06:33 compute-0 podman[265288]: 2026-01-27 16:06:33.374929775 +0000 UTC m=+0.123477002 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 16:06:33 compute-0 nova_compute[185191]: 2026-01-27 16:06:33.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:06:33 compute-0 nova_compute[185191]: 2026-01-27 16:06:33.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 16:06:34 compute-0 nova_compute[185191]: 2026-01-27 16:06:34.006 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 16:06:36 compute-0 nova_compute[185191]: 2026-01-27 16:06:36.498 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:36 compute-0 nova_compute[185191]: 2026-01-27 16:06:36.863 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:40 compute-0 podman[265370]: 2026-01-27 16:06:40.309801043 +0000 UTC m=+0.067411044 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:06:41 compute-0 nova_compute[185191]: 2026-01-27 16:06:41.503 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:41 compute-0 nova_compute[185191]: 2026-01-27 16:06:41.865 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:43 compute-0 podman[265389]: 2026-01-27 16:06:43.320929685 +0000 UTC m=+0.057191289 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 16:06:43 compute-0 podman[265388]: 2026-01-27 16:06:43.370229101 +0000 UTC m=+0.104992205 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, name=ubi9, maintainer=Red Hat, Inc., config_id=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=)
Jan 27 16:06:46 compute-0 nova_compute[185191]: 2026-01-27 16:06:46.508 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:46 compute-0 nova_compute[185191]: 2026-01-27 16:06:46.867 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:48 compute-0 podman[265430]: 2026-01-27 16:06:48.314997855 +0000 UTC m=+0.066282823 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 16:06:51 compute-0 nova_compute[185191]: 2026-01-27 16:06:51.512 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:51 compute-0 nova_compute[185191]: 2026-01-27 16:06:51.869 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:56 compute-0 nova_compute[185191]: 2026-01-27 16:06:56.517 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:56 compute-0 nova_compute[185191]: 2026-01-27 16:06:56.872 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:06:59 compute-0 podman[201073]: time="2026-01-27T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:06:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:06:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:07:00.294 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:07:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:07:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:07:00.296 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:07:01 compute-0 openstack_network_exporter[204239]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:07:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:07:01 compute-0 openstack_network_exporter[204239]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:07:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:07:01 compute-0 nova_compute[185191]: 2026-01-27 16:07:01.519 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:01 compute-0 nova_compute[185191]: 2026-01-27 16:07:01.874 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.008 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.037 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.037 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.037 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.038 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:07:04 compute-0 podman[265455]: 2026-01-27 16:07:04.33787625 +0000 UTC m=+0.094500512 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 27 16:07:04 compute-0 podman[265456]: 2026-01-27 16:07:04.342318049 +0000 UTC m=+0.096726472 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 16:07:04 compute-0 podman[265458]: 2026-01-27 16:07:04.366356246 +0000 UTC m=+0.111471249 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 16:07:04 compute-0 podman[265457]: 2026-01-27 16:07:04.412003234 +0000 UTC m=+0.161636598 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.413 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.414 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5305MB free_disk=72.33965301513672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.415 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.415 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.481 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.482 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.497 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing inventories for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.517 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating ProviderTree inventory for provider dbf037fd-3291-487b-ae9c-69178dae2ebc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.517 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Updating inventory in ProviderTree for provider dbf037fd-3291-487b-ae9c-69178dae2ebc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.530 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing aggregate associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.550 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Refreshing trait associations for resource provider dbf037fd-3291-487b-ae9c-69178dae2ebc, traits: HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_ACCELERATORS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.572 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.585 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.587 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:07:04 compute-0 nova_compute[185191]: 2026-01-27 16:07:04.587 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:07:06 compute-0 nova_compute[185191]: 2026-01-27 16:07:06.523 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:06 compute-0 nova_compute[185191]: 2026-01-27 16:07:06.876 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:08 compute-0 nova_compute[185191]: 2026-01-27 16:07:08.519 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:10 compute-0 nova_compute[185191]: 2026-01-27 16:07:10.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.002 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.003 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.003 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.005 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.009 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.005 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.012 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.014 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.018 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.019 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.020 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.021 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:07:11.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:07:11 compute-0 podman[265538]: 2026-01-27 16:07:11.331402066 +0000 UTC m=+0.078997226 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:07:11 compute-0 nova_compute[185191]: 2026-01-27 16:07:11.527 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:11 compute-0 nova_compute[185191]: 2026-01-27 16:07:11.879 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:12 compute-0 nova_compute[185191]: 2026-01-27 16:07:12.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:14 compute-0 podman[265557]: 2026-01-27 16:07:14.328823959 +0000 UTC m=+0.084429561 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 16:07:14 compute-0 podman[265558]: 2026-01-27 16:07:14.344635594 +0000 UTC m=+0.084795651 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 16:07:15 compute-0 nova_compute[185191]: 2026-01-27 16:07:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:15 compute-0 nova_compute[185191]: 2026-01-27 16:07:15.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.531 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.880 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:07:16 compute-0 nova_compute[185191]: 2026-01-27 16:07:16.987 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:07:18 compute-0 nova_compute[185191]: 2026-01-27 16:07:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:18 compute-0 nova_compute[185191]: 2026-01-27 16:07:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:19 compute-0 podman[265602]: 2026-01-27 16:07:19.313095388 +0000 UTC m=+0.069820189 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 16:07:21 compute-0 nova_compute[185191]: 2026-01-27 16:07:21.534 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:21 compute-0 nova_compute[185191]: 2026-01-27 16:07:21.882 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:21 compute-0 nova_compute[185191]: 2026-01-27 16:07:21.938 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:26 compute-0 nova_compute[185191]: 2026-01-27 16:07:26.538 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:26 compute-0 nova_compute[185191]: 2026-01-27 16:07:26.885 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:29 compute-0 podman[201073]: time="2026-01-27T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:07:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:07:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 27 16:07:30 compute-0 nova_compute[185191]: 2026-01-27 16:07:30.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:07:31 compute-0 openstack_network_exporter[204239]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:07:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:07:31 compute-0 openstack_network_exporter[204239]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:07:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:07:31 compute-0 nova_compute[185191]: 2026-01-27 16:07:31.540 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:31 compute-0 nova_compute[185191]: 2026-01-27 16:07:31.888 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:35 compute-0 podman[265627]: 2026-01-27 16:07:35.324379648 +0000 UTC m=+0.076295073 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 16:07:35 compute-0 podman[265626]: 2026-01-27 16:07:35.333704339 +0000 UTC m=+0.092125709 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 16:07:35 compute-0 podman[265632]: 2026-01-27 16:07:35.340934643 +0000 UTC m=+0.080479145 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Jan 27 16:07:35 compute-0 podman[265628]: 2026-01-27 16:07:35.357336735 +0000 UTC m=+0.105472998 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:07:36 compute-0 nova_compute[185191]: 2026-01-27 16:07:36.541 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:36 compute-0 nova_compute[185191]: 2026-01-27 16:07:36.890 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:41 compute-0 nova_compute[185191]: 2026-01-27 16:07:41.545 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:41 compute-0 nova_compute[185191]: 2026-01-27 16:07:41.893 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:42 compute-0 podman[265704]: 2026-01-27 16:07:42.304576705 +0000 UTC m=+0.062443091 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 16:07:44 compute-0 podman[265725]: 2026-01-27 16:07:44.754926595 +0000 UTC m=+0.077121625 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 16:07:44 compute-0 podman[265724]: 2026-01-27 16:07:44.77519185 +0000 UTC m=+0.105264972 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543)
Jan 27 16:07:46 compute-0 nova_compute[185191]: 2026-01-27 16:07:46.548 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:46 compute-0 nova_compute[185191]: 2026-01-27 16:07:46.896 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:50 compute-0 podman[265767]: 2026-01-27 16:07:50.318891604 +0000 UTC m=+0.068822562 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 16:07:51 compute-0 nova_compute[185191]: 2026-01-27 16:07:51.553 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:51 compute-0 nova_compute[185191]: 2026-01-27 16:07:51.899 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:56 compute-0 nova_compute[185191]: 2026-01-27 16:07:56.557 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:56 compute-0 nova_compute[185191]: 2026-01-27 16:07:56.901 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:07:59 compute-0 podman[201073]: time="2026-01-27T16:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:07:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:07:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3926 "" "Go-http-client/1.1"
Jan 27 16:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:08:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:08:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:08:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:08:00.295 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:08:01 compute-0 openstack_network_exporter[204239]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:08:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:08:01 compute-0 openstack_network_exporter[204239]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:08:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:08:01 compute-0 nova_compute[185191]: 2026-01-27 16:08:01.559 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:01 compute-0 nova_compute[185191]: 2026-01-27 16:08:01.904 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:04 compute-0 nova_compute[185191]: 2026-01-27 16:08:04.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:04 compute-0 nova_compute[185191]: 2026-01-27 16:08:04.973 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:08:04 compute-0 nova_compute[185191]: 2026-01-27 16:08:04.973 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:08:04 compute-0 nova_compute[185191]: 2026-01-27 16:08:04.974 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:08:04 compute-0 nova_compute[185191]: 2026-01-27 16:08:04.974 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.288 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.289 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5307MB free_disk=72.33969116210938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.289 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.290 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.348 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.348 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.368 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.383 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.385 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:08:05 compute-0 nova_compute[185191]: 2026-01-27 16:08:05.385 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:08:06 compute-0 podman[265791]: 2026-01-27 16:08:06.31216867 +0000 UTC m=+0.059426789 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 16:08:06 compute-0 podman[265798]: 2026-01-27 16:08:06.33337183 +0000 UTC m=+0.074087693 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64)
Jan 27 16:08:06 compute-0 podman[265790]: 2026-01-27 16:08:06.343102812 +0000 UTC m=+0.096353002 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126)
Jan 27 16:08:06 compute-0 podman[265792]: 2026-01-27 16:08:06.353468711 +0000 UTC m=+0.094385409 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 16:08:06 compute-0 nova_compute[185191]: 2026-01-27 16:08:06.561 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:06 compute-0 nova_compute[185191]: 2026-01-27 16:08:06.907 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:08 compute-0 nova_compute[185191]: 2026-01-27 16:08:08.381 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:11 compute-0 nova_compute[185191]: 2026-01-27 16:08:11.565 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:11 compute-0 nova_compute[185191]: 2026-01-27 16:08:11.910 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:12 compute-0 nova_compute[185191]: 2026-01-27 16:08:12.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:13 compute-0 podman[265874]: 2026-01-27 16:08:13.324347768 +0000 UTC m=+0.072760998 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 16:08:13 compute-0 nova_compute[185191]: 2026-01-27 16:08:13.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:15 compute-0 podman[265895]: 2026-01-27 16:08:15.378571596 +0000 UTC m=+0.110788241 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 16:08:15 compute-0 podman[265894]: 2026-01-27 16:08:15.391216396 +0000 UTC m=+0.127729847 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Jan 27 16:08:15 compute-0 nova_compute[185191]: 2026-01-27 16:08:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:15 compute-0 nova_compute[185191]: 2026-01-27 16:08:15.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.569 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.913 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.946 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:08:16 compute-0 nova_compute[185191]: 2026-01-27 16:08:16.961 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:08:18 compute-0 nova_compute[185191]: 2026-01-27 16:08:18.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:19 compute-0 nova_compute[185191]: 2026-01-27 16:08:19.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:21 compute-0 podman[265937]: 2026-01-27 16:08:21.308676121 +0000 UTC m=+0.068943435 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 16:08:21 compute-0 nova_compute[185191]: 2026-01-27 16:08:21.573 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:21 compute-0 nova_compute[185191]: 2026-01-27 16:08:21.916 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:26 compute-0 nova_compute[185191]: 2026-01-27 16:08:26.579 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:26 compute-0 nova_compute[185191]: 2026-01-27 16:08:26.918 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:29 compute-0 podman[201073]: time="2026-01-27T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:08:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:08:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3928 "" "Go-http-client/1.1"
Jan 27 16:08:30 compute-0 nova_compute[185191]: 2026-01-27 16:08:30.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:08:31 compute-0 openstack_network_exporter[204239]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:08:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:08:31 compute-0 openstack_network_exporter[204239]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:08:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:08:31 compute-0 nova_compute[185191]: 2026-01-27 16:08:31.581 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:31 compute-0 nova_compute[185191]: 2026-01-27 16:08:31.919 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:36 compute-0 nova_compute[185191]: 2026-01-27 16:08:36.585 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:36 compute-0 nova_compute[185191]: 2026-01-27 16:08:36.922 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:37 compute-0 podman[265961]: 2026-01-27 16:08:37.317045224 +0000 UTC m=+0.063463718 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:08:37 compute-0 podman[265960]: 2026-01-27 16:08:37.334891274 +0000 UTC m=+0.084744290 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 16:08:37 compute-0 podman[265963]: 2026-01-27 16:08:37.367694006 +0000 UTC m=+0.103110404 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 16:08:37 compute-0 podman[265962]: 2026-01-27 16:08:37.37341103 +0000 UTC m=+0.111285254 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 16:08:41 compute-0 nova_compute[185191]: 2026-01-27 16:08:41.589 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:41 compute-0 nova_compute[185191]: 2026-01-27 16:08:41.925 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:42 compute-0 sshd-session[266038]: Invalid user sol from 45.148.10.240 port 50184
Jan 27 16:08:42 compute-0 sshd-session[266038]: Connection closed by invalid user sol 45.148.10.240 port 50184 [preauth]
Jan 27 16:08:44 compute-0 podman[266040]: 2026-01-27 16:08:44.304504657 +0000 UTC m=+0.066516740 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 16:08:46 compute-0 podman[266061]: 2026-01-27 16:08:46.332541168 +0000 UTC m=+0.085293544 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 16:08:46 compute-0 podman[266060]: 2026-01-27 16:08:46.338258032 +0000 UTC m=+0.095690824 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=kepler, distribution-scope=public)
Jan 27 16:08:46 compute-0 nova_compute[185191]: 2026-01-27 16:08:46.592 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:46 compute-0 nova_compute[185191]: 2026-01-27 16:08:46.926 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:51 compute-0 nova_compute[185191]: 2026-01-27 16:08:51.596 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:51 compute-0 nova_compute[185191]: 2026-01-27 16:08:51.929 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:52 compute-0 podman[266105]: 2026-01-27 16:08:52.342867793 +0000 UTC m=+0.096603419 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 16:08:56 compute-0 nova_compute[185191]: 2026-01-27 16:08:56.601 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:56 compute-0 nova_compute[185191]: 2026-01-27 16:08:56.931 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:08:59 compute-0 podman[201073]: time="2026-01-27T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:08:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:08:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:09:00.297 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:09:00.297 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:09:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:09:00.298 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:09:01 compute-0 openstack_network_exporter[204239]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:09:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:09:01 compute-0 openstack_network_exporter[204239]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:09:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:09:01 compute-0 nova_compute[185191]: 2026-01-27 16:09:01.602 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:01 compute-0 nova_compute[185191]: 2026-01-27 16:09:01.932 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:05 compute-0 nova_compute[185191]: 2026-01-27 16:09:05.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.273 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.273 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.273 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.273 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.605 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.645 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.646 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5294MB free_disk=72.33969116210938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.647 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.647 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.924 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.925 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.935 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.951 185195 DEBUG nova.compute.provider_tree [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed in ProviderTree for provider: dbf037fd-3291-487b-ae9c-69178dae2ebc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.966 185195 DEBUG nova.scheduler.client.report [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Inventory has not changed for provider dbf037fd-3291-487b-ae9c-69178dae2ebc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.968 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 16:09:06 compute-0 nova_compute[185191]: 2026-01-27 16:09:06.968 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:09:08 compute-0 podman[266129]: 2026-01-27 16:09:08.315582037 +0000 UTC m=+0.067779284 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 16:09:08 compute-0 podman[266130]: 2026-01-27 16:09:08.328292899 +0000 UTC m=+0.076307363 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 27 16:09:08 compute-0 podman[266137]: 2026-01-27 16:09:08.355367617 +0000 UTC m=+0.093131236 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 16:09:08 compute-0 podman[266131]: 2026-01-27 16:09:08.372702423 +0000 UTC m=+0.115483987 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 16:09:08 compute-0 nova_compute[185191]: 2026-01-27 16:09:08.963 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.003 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.004 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.004 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f02d93b27e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2870>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.006 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b1880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b3080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b28d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.007 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2930>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.007 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f02d93b2840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.008 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b19d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f02d93b0e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.010 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02dc6bc200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da622270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02da63a2a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.011 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2ae0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2b40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.012 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b04d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.013 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b05f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.014 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2e10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2630>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.015 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.016 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.016 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.017 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.017 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b2750>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.017 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b0770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.018 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f02d93b27b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f02d938ea20>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f02d93b3050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f02d93b28a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f02d93b2900>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f02d93b0470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f02d93b0e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f02d93b3320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f02d93b0530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f02d93b2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f02d93b2b10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f02d93b07a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f02d93b0710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f02d93b2b70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f02d93b05c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f02d93b0ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f02d93b0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f02d93b2540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f02d93b0650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f02d93b2660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f02d93b06b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f02d93b26c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f02d93b2720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.025 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f02d93b0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.025 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f02d93b2780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f02da4c5eb0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.025 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 ceilometer_agent_compute[194902]: 2026-01-27 16:09:11.028 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 16:09:11 compute-0 nova_compute[185191]: 2026-01-27 16:09:11.609 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:11 compute-0 nova_compute[185191]: 2026-01-27 16:09:11.937 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:14 compute-0 podman[266207]: 2026-01-27 16:09:14.816041152 +0000 UTC m=+0.116823863 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 16:09:14 compute-0 nova_compute[185191]: 2026-01-27 16:09:14.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:15 compute-0 nova_compute[185191]: 2026-01-27 16:09:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:15 compute-0 nova_compute[185191]: 2026-01-27 16:09:15.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:15 compute-0 nova_compute[185191]: 2026-01-27 16:09:15.944 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 16:09:16 compute-0 nova_compute[185191]: 2026-01-27 16:09:16.616 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:16 compute-0 nova_compute[185191]: 2026-01-27 16:09:16.939 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:17 compute-0 podman[266227]: 2026-01-27 16:09:17.342353906 +0000 UTC m=+0.097613036 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 16:09:17 compute-0 podman[266226]: 2026-01-27 16:09:17.371104749 +0000 UTC m=+0.133037039 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, config_id=kepler)
Jan 27 16:09:18 compute-0 nova_compute[185191]: 2026-01-27 16:09:18.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:18 compute-0 nova_compute[185191]: 2026-01-27 16:09:18.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 16:09:18 compute-0 nova_compute[185191]: 2026-01-27 16:09:18.945 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 16:09:18 compute-0 nova_compute[185191]: 2026-01-27 16:09:18.976 185195 DEBUG nova.compute.manager [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 16:09:20 compute-0 nova_compute[185191]: 2026-01-27 16:09:20.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:21 compute-0 nova_compute[185191]: 2026-01-27 16:09:21.620 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:21 compute-0 nova_compute[185191]: 2026-01-27 16:09:21.942 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:21 compute-0 nova_compute[185191]: 2026-01-27 16:09:21.945 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:23 compute-0 podman[266269]: 2026-01-27 16:09:23.310191347 +0000 UTC m=+0.058306960 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 16:09:25 compute-0 nova_compute[185191]: 2026-01-27 16:09:25.939 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:26 compute-0 nova_compute[185191]: 2026-01-27 16:09:26.626 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:26 compute-0 nova_compute[185191]: 2026-01-27 16:09:26.943 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:29 compute-0 podman[201073]: time="2026-01-27T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:09:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:09:29 compute-0 podman[201073]: @ - - [27/Jan/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3927 "" "Go-http-client/1.1"
Jan 27 16:09:31 compute-0 openstack_network_exporter[204239]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:09:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:09:31 compute-0 openstack_network_exporter[204239]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:09:31 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:09:31 compute-0 nova_compute[185191]: 2026-01-27 16:09:31.628 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:31 compute-0 nova_compute[185191]: 2026-01-27 16:09:31.946 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:32 compute-0 nova_compute[185191]: 2026-01-27 16:09:32.944 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:09:36 compute-0 nova_compute[185191]: 2026-01-27 16:09:36.631 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:36 compute-0 nova_compute[185191]: 2026-01-27 16:09:36.949 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:39 compute-0 podman[266294]: 2026-01-27 16:09:39.318850474 +0000 UTC m=+0.075801799 container health_status ae1427fbbf78feef1183329cfb777fff470ba256751b8449158366a3b21bea3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 16:09:39 compute-0 podman[266293]: 2026-01-27 16:09:39.325407091 +0000 UTC m=+0.084447272 container health_status 873bff16cccb7285c8ea9a1c183dfda0cab73a7648942e4f2515cc8086c27088 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=ba8fbe74d58b22a20288dc92edc33052, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 16:09:39 compute-0 podman[266296]: 2026-01-27 16:09:39.367187834 +0000 UTC m=+0.113023670 container health_status f2e356a7964dbf4353fbeef787d0afabe388f753f88a2195661dc952ab6b6931 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350)
Jan 27 16:09:39 compute-0 podman[266295]: 2026-01-27 16:09:39.377016259 +0000 UTC m=+0.115207120 container health_status e3f8f1a577281ebf18e53a2e9351362ca9019f857267593b4af327d46bf6e014 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 16:09:41 compute-0 nova_compute[185191]: 2026-01-27 16:09:41.636 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:41 compute-0 nova_compute[185191]: 2026-01-27 16:09:41.951 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:42 compute-0 sshd-session[266376]: Received disconnect from 45.148.10.157 port 15858:11:  [preauth]
Jan 27 16:09:42 compute-0 sshd-session[266376]: Disconnected from authenticating user root 45.148.10.157 port 15858 [preauth]
Jan 27 16:09:45 compute-0 podman[266378]: 2026-01-27 16:09:45.371844195 +0000 UTC m=+0.133560893 container health_status 3e239cbe4fe4ea123cbee45bbac230483a32017d7c2dc79e846b527df37cbcc4 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'cfb7c532cce220a35f451ac91eec5d497ded2900f1a70c5df8a92546b4e35cbb-835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 16:09:46 compute-0 nova_compute[185191]: 2026-01-27 16:09:46.640 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:46 compute-0 nova_compute[185191]: 2026-01-27 16:09:46.953 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:47 compute-0 sshd-session[266398]: Accepted publickey for zuul from 192.168.122.10 port 38088 ssh2: ECDSA SHA256:eiMUgn66BDCEzWGZn7m4DwA+o7QEjafZRGvhqXAM8Uo
Jan 27 16:09:47 compute-0 systemd-logind[820]: New session 35 of user zuul.
Jan 27 16:09:47 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 27 16:09:47 compute-0 sshd-session[266398]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 16:09:47 compute-0 podman[266402]: 2026-01-27 16:09:47.646979542 +0000 UTC m=+0.075692536 container health_status 34d627b2c746d7a3ccb200e58da3f1c3e0967e76cb79f0abe9105c6164a40cf1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 16:09:47 compute-0 podman[266400]: 2026-01-27 16:09:47.656874888 +0000 UTC m=+0.089426386 container health_status 0cffad1bf2bbc06a559695436f4ea46be27f4eb816188a71e8937806af598039 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, name=ubi9, distribution-scope=public, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Jan 27 16:09:47 compute-0 sudo[266445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 16:09:47 compute-0 sudo[266445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 16:09:51 compute-0 nova_compute[185191]: 2026-01-27 16:09:51.645 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:51 compute-0 nova_compute[185191]: 2026-01-27 16:09:51.960 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:52 compute-0 ovs-vsctl[266612]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 16:09:53 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 16:09:53 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 16:09:53 compute-0 virtqemud[184937]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 16:09:54 compute-0 podman[266800]: 2026-01-27 16:09:54.299054514 +0000 UTC m=+0.089055246 container health_status b3cab1bc4a89ac469c94b6178c245218b62f0c24c20aa8adcbaa4491f5b26bf7 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '835baedf917b672c35cb1acfa01d6329e2503a53e8bc988d481b7e963a0921ae-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 16:09:55 compute-0 crontab[267037]: (root) LIST (root)
Jan 27 16:09:56 compute-0 nova_compute[185191]: 2026-01-27 16:09:56.649 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:56 compute-0 nova_compute[185191]: 2026-01-27 16:09:56.964 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:09:57 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 16:09:57 compute-0 systemd[1]: Started Hostname Service.
Jan 27 16:09:59 compute-0 podman[201073]: time="2026-01-27T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 16:09:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 16:09:59 compute-0 podman[201073]: @ - - [27/Jan/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3923 "" "Go-http-client/1.1"
Jan 27 16:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:10:00.298 106793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:10:00.300 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:10:00 compute-0 ovn_metadata_agent[106788]: 2026-01-27 16:10:00.300 106793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:10:01 compute-0 openstack_network_exporter[204239]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 16:10:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:10:01 compute-0 openstack_network_exporter[204239]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 16:10:01 compute-0 openstack_network_exporter[204239]: 
Jan 27 16:10:01 compute-0 nova_compute[185191]: 2026-01-27 16:10:01.653 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:10:01 compute-0 nova_compute[185191]: 2026-01-27 16:10:01.967 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:10:06 compute-0 ovs-appctl[268317]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 16:10:06 compute-0 ovs-appctl[268321]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 16:10:06 compute-0 ovs-appctl[268326]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.655 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.943 185195 DEBUG oslo_service.periodic_task [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.969 185195 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.971 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.971 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.971 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 16:10:06 compute-0 nova_compute[185191]: 2026-01-27 16:10:06.971 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 16:10:07 compute-0 nova_compute[185191]: 2026-01-27 16:10:07.272 185195 WARNING nova.virt.libvirt.driver [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 16:10:07 compute-0 nova_compute[185191]: 2026-01-27 16:10:07.273 185195 DEBUG nova.compute.resource_tracker [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.07421493530273GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 16:10:07 compute-0 nova_compute[185191]: 2026-01-27 16:10:07.274 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 16:10:07 compute-0 nova_compute[185191]: 2026-01-27 16:10:07.274 185195 DEBUG oslo_concurrency.lockutils [None req-d070de9d-c5b2-4045-a633-be032e876441 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
